Jan 30 16:22:24 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 16:22:24 crc restorecon[4739]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 
crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:22:24 
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
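[Editor's note: the ConfigMap volume entries above show kubelet's atomic-update layout. The payload lives in a timestamped directory (..2025_02_23_05_22_30.3608339744 for pod 01ab3dd5), ..data is a symlink to it, and the user-visible file names are symlinks that resolve through ..data, so an update only swaps one symlink. A sketch that inspects this layout on the node, using a mount path taken from the log:]

```python
import os

# Illustrative only: a ConfigMap volume path copied from the log above.
mount = ("/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/"
         "volumes/kubernetes.io~configmap/config")

# ..data points at the current timestamped payload directory.
current = os.path.realpath(os.path.join(mount, "..data"))
print("current payload dir:", current)

# Top-level names (operator-config.yaml here) are symlinks through ..data.
for name in sorted(os.listdir(mount)):
    full = os.path.join(mount, name)
    target = os.readlink(full) if os.path.islink(full) else "(real dir)"
    print(f"{name} -> {target}")
```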
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
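[Editor's note: each pod's files carry an MCS category pair, for example s0:c268,c620 for pod efdd0498 above, and SELinux uses that pair to keep one pod's processes away from another pod's files; the etcd static pod just before this point shows several pairs because files from successive container instances survive on disk. A small, illustrative parser that pulls the pod UID and category pair out of a journal record of the shape shown in this log:]

```python
import re

# One record copied verbatim from the log above.
LINE = ("Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/"
        "efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as "
        "customized by admin to system_u:object_r:container_file_t:s0:c268,c620")

# Pod UID is the path component after /pods/; the context is the final token.
PATTERN = re.compile(
    r"/var/lib/kubelet/pods/(?P<pod>[^/]+)/.* to "
    r"(?P<context>\S+:c\d+,c\d+)$"
)

m = PATTERN.search(LINE)
if m:
    user, role, setype, sens, cats = m.group("context").split(":")
    print(m.group("pod"), setype, sens, cats)
    # -> efdd0498-1daa-4136-9a4a-3b948c2293fc container_file_t s0 c268,c620
```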
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
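[Editor's note: the 57a731c4 entries trace an operator-catalog pod whose extracted file-based catalog sits in an emptyDir volume, one directory per package with a catalog.json inside (bpfman-operator uses index.json). A quick illustrative sketch, using the path from this log, that tallies the extracted packages:]

```python
import os

# Catalog root copied from the log above; illustrative only.
root = ("/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/"
        "volumes/kubernetes.io~empty-dir/catalog-content/catalog")

# Map each package directory to the files it contains.
packages = {}
for entry in sorted(os.listdir(root)):
    pkg_dir = os.path.join(root, entry)
    if os.path.isdir(pkg_dir):
        packages[entry] = sorted(os.listdir(pkg_dir))

print(len(packages), "packages extracted")
for name, files in packages.items():
    print(name, files)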
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:22:24 crc 
restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:24 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:22:25 crc restorecon[4739]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
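
The four Flag deprecation warnings above all point to the same remedy: move the values into the KubeletConfiguration file named by --config. A minimal sketch of the equivalent stanzas, assuming the kubelet.config.k8s.io/v1beta1 schema; the socket path, taint, and reservation values below are illustrative placeholders, not values read from this node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
    registerWithTaints:                                           # replaces --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # replaces --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:                 # the suggested successor to --minimum-container-ttl-duration
      memory.available: 100Mi

--pod-infra-container-image (warned about in the next entries) has no config-file counterpart; per the message that follows it, the sandbox image is taken from the CRI runtime's own configuration instead.
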
Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 16:22:25 crc kubenswrapper[4766]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.821139 4766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828103 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828140 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828145 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828151 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828155 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828159 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828164 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828170 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828199 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828204 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828209 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828214 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828219 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828223 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828228 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828232 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828236 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828241 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828245 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828250 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
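
The burst of "unrecognized feature gate" warnings is the upstream kubelet feature-gate parser (feature_gate.go:330) encountering gate names that appear to be OpenShift-level gates (GatewayAPI, NewOLM, MachineConfigNodes, and the rest) rather than upstream Kubernetes ones; it logs each and continues, so these read as noise rather than failures. Gates the kubelet does recognize are applied, with a warning when the gate is already GA (CloudDualStackNodeIPs, feature_gate.go:353) or deprecated (KMSv1, feature_gate.go:351), meaning the explicit setting will stop having effect in a future release. In a hand-written config these would sit in the same KubeletConfiguration file; an illustrative fragment, not taken from this node:

    featureGates:
      CloudDualStackNodeIPs: true   # GA upstream: logs the feature_gate.go:353 warning
      KMSv1: true                   # deprecated upstream: logs the feature_gate.go:351 warning
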
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828257 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828263 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828269 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828276 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828282 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828287 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828293 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828298 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828303 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828308 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828321 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828326 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828330 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828336 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828340 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828345 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828349 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828354 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828358 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828363 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828367 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828371 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828375 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828380 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828384 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828388 
4766 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828393 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828400 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828406 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828410 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828415 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828420 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828424 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828430 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828434 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828438 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828442 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828446 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828451 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828455 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828459 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828463 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828467 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828471 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828476 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828480 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828484 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828488 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828492 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828496 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.828500 4766 feature_gate.go:330] unrecognized 
feature gate: BootcNodeManagement Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829362 4766 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829381 4766 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829389 4766 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829396 4766 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829403 4766 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829409 4766 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829415 4766 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829421 4766 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829427 4766 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829432 4766 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829437 4766 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829444 4766 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829449 4766 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829454 4766 flags.go:64] FLAG: --cgroup-root="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829459 4766 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829464 4766 flags.go:64] FLAG: --client-ca-file="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829469 4766 flags.go:64] FLAG: --cloud-config="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829474 4766 flags.go:64] FLAG: --cloud-provider="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829478 4766 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829486 4766 flags.go:64] FLAG: --cluster-domain="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829491 4766 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829498 4766 flags.go:64] FLAG: --config-dir="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829503 4766 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829509 4766 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829515 4766 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829520 4766 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829525 4766 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829530 4766 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829535 4766 flags.go:64] FLAG: --contention-profiling="false" Jan 30 16:22:25 
crc kubenswrapper[4766]: I0130 16:22:25.829540 4766 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829545 4766 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829550 4766 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829555 4766 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829561 4766 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829566 4766 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829571 4766 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829576 4766 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829581 4766 flags.go:64] FLAG: --enable-server="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829585 4766 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829592 4766 flags.go:64] FLAG: --event-burst="100" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829597 4766 flags.go:64] FLAG: --event-qps="50" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829602 4766 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829606 4766 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829611 4766 flags.go:64] FLAG: --eviction-hard="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829617 4766 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829622 4766 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829628 4766 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829633 4766 flags.go:64] FLAG: --eviction-soft="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829637 4766 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829642 4766 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829647 4766 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829652 4766 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829656 4766 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829663 4766 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829668 4766 flags.go:64] FLAG: --feature-gates="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829674 4766 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829679 4766 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829684 4766 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829689 4766 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 16:22:25 crc 
kubenswrapper[4766]: I0130 16:22:25.829695 4766 flags.go:64] FLAG: --healthz-port="10248" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829701 4766 flags.go:64] FLAG: --help="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829707 4766 flags.go:64] FLAG: --hostname-override="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829712 4766 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829717 4766 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829722 4766 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829727 4766 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829731 4766 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829736 4766 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829741 4766 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829746 4766 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829750 4766 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829755 4766 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829761 4766 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829765 4766 flags.go:64] FLAG: --kube-reserved="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829770 4766 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829775 4766 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829780 4766 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829785 4766 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829790 4766 flags.go:64] FLAG: --lock-file="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829795 4766 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829800 4766 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829805 4766 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829821 4766 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829826 4766 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829830 4766 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829836 4766 flags.go:64] FLAG: --logging-format="text" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829842 4766 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829847 4766 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829852 4766 flags.go:64] FLAG: --manifest-url="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 
16:22:25.829857 4766 flags.go:64] FLAG: --manifest-url-header="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829864 4766 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829869 4766 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829875 4766 flags.go:64] FLAG: --max-pods="110" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829880 4766 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829885 4766 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829890 4766 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829895 4766 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829900 4766 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829905 4766 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829910 4766 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829923 4766 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829928 4766 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829933 4766 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829939 4766 flags.go:64] FLAG: --pod-cidr="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829943 4766 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829952 4766 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829956 4766 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829961 4766 flags.go:64] FLAG: --pods-per-core="0" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829966 4766 flags.go:64] FLAG: --port="10250" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829971 4766 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829975 4766 flags.go:64] FLAG: --provider-id="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829980 4766 flags.go:64] FLAG: --qos-reserved="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829985 4766 flags.go:64] FLAG: --read-only-port="10255" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829989 4766 flags.go:64] FLAG: --register-node="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829994 4766 flags.go:64] FLAG: --register-schedulable="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.829999 4766 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830008 4766 flags.go:64] FLAG: --registry-burst="10" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830013 4766 flags.go:64] FLAG: --registry-qps="5" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830020 
4766 flags.go:64] FLAG: --reserved-cpus="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830025 4766 flags.go:64] FLAG: --reserved-memory="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830031 4766 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830036 4766 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830041 4766 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830046 4766 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830050 4766 flags.go:64] FLAG: --runonce="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830055 4766 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830060 4766 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830065 4766 flags.go:64] FLAG: --seccomp-default="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830070 4766 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830074 4766 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830079 4766 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830084 4766 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830089 4766 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830094 4766 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830100 4766 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830104 4766 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830109 4766 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830114 4766 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830119 4766 flags.go:64] FLAG: --system-cgroups="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830124 4766 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830132 4766 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830137 4766 flags.go:64] FLAG: --tls-cert-file="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830142 4766 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830149 4766 flags.go:64] FLAG: --tls-min-version="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830155 4766 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830160 4766 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830165 4766 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830170 4766 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830198 
4766 flags.go:64] FLAG: --v="2" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830205 4766 flags.go:64] FLAG: --version="false" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830215 4766 flags.go:64] FLAG: --vmodule="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830221 4766 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830227 4766 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830377 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830385 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830391 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830398 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830402 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830407 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830411 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830416 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830421 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830425 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830429 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830433 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830438 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830442 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830446 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830450 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830454 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830458 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830463 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830467 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830471 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:22:25 crc 
kubenswrapper[4766]: W0130 16:22:25.830475 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830479 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830483 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830489 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830494 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830499 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830505 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830509 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830514 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830519 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830523 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830527 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830531 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830535 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830539 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830544 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
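[Annotation] The flags.go:64 "FLAG: --name=\"value\"" dump above records every effective command-line setting at startup, which makes it a convenient baseline for diffing kubelet configuration between boots. A hedged parsing sketch (filename assumed) follows. Note one detail visible later in this log: the command line says --cgroup-driver="cgroupfs", but the kubelet subsequently adopts "systemd" as received from the CRI runtime.

```python
# Sketch: turn the FLAG dump above into a dict for comparison across boots.
# "kubelet-journal.log" is a hypothetical filename.
import re

FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(path="kubelet-journal.log"):
    with open(path) as fh:
        return dict(FLAG.findall(fh.read()))

flags = parse_flags()
print(flags.get("--node-ip"))         # "192.168.126.11" in the dump above
print(flags.get("--cgroup-driver"))   # "cgroupfs" on the CLI; the CRI runtime
                                      # later reports "systemd"
```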
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830550 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830555 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830560 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830565 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830569 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830574 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830579 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830583 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830588 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830593 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830598 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830602 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830607 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830612 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830617 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830622 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830626 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830631 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830636 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830641 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830645 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830649 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830653 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830657 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830662 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830667 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:22:25 
crc kubenswrapper[4766]: W0130 16:22:25.830671 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830675 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830679 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830684 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830688 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830692 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830696 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.830701 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.830718 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838244 4766 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838278 4766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838366 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838376 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838381 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838386 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838391 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838397 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838402 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838406 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838411 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838416 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838420 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838425 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:22:25 
crc kubenswrapper[4766]: W0130 16:22:25.838429 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838434 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838438 4766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838443 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838447 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838452 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838456 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838461 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838465 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838472 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838478 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838483 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838487 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838492 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838497 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838503 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838511 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838517 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838523 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
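[Annotation] The "unrecognized feature gate" warnings in these blocks name OpenShift-level gates that the upstream kubelet's feature-gate registry does not know, so the kubelet logs a warning rather than failing; the same block apparently recurs once per pass over the configured gate set, which is why identical names repeat. A small counter like the following sketch (filename assumed) separates the distinct gate names from the repetition.

```python
# Sketch: count distinct unrecognized feature gates and their recurrence.
# "kubelet-journal.log" is a hypothetical filename.
import re
from collections import Counter

GATE = re.compile(r"unrecognized feature gate: (\w+)")

with open("kubelet-journal.log") as fh:
    counts = Counter(GATE.findall(fh.read()))

print(f"{len(counts)} distinct unrecognized gates")
for gate, n in counts.most_common(5):
    print(f"{gate}: seen {n} times")
```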
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838529 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838535 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838541 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838547 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838552 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838556 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838561 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838566 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838570 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838574 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838579 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838583 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838587 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838592 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838596 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838601 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838605 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838610 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838614 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838619 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838623 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838628 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838633 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838637 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838642 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838646 4766 feature_gate.go:330] unrecognized feature gate: 
SignatureStores Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838651 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838656 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838661 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838666 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838671 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838675 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838679 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838685 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838690 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838696 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838701 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838707 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838712 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838716 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.838726 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838872 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838881 4766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838886 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838892 4766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838897 4766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838901 4766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838907 4766 feature_gate.go:330] unrecognized 
feature gate: GCPClusterHostedDNS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838911 4766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838917 4766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838922 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838927 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838933 4766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838938 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838944 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838949 4766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838953 4766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838959 4766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838963 4766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838968 4766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838973 4766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838978 4766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838983 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838987 4766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838992 4766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.838996 4766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839001 4766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839005 4766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839010 4766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839016 4766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839020 4766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839025 4766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839029 4766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 
16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839034 4766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839038 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839043 4766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839048 4766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839052 4766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839057 4766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839063 4766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839069 4766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839075 4766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839079 4766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839084 4766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839089 4766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839094 4766 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839099 4766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839105 4766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839111 4766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839115 4766 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839120 4766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839126 4766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
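[Annotation] After each parse, the kubelet logs the resulting effective gate set as a Go-style map line ("feature gates: {map[Name:bool ...]}"), which appears verbatim several times in this excerpt. A sketch for turning that line into a Python dict, so successive dumps can be diffed, follows; the sample line below is a subset of the map actually logged here.

```python
# Sketch: parse the kubelet's "feature gates: {map[...]}" summary line.
import re

MAP = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_gates(line):
    m = MAP.search(line)
    if not m:
        return {}
    pairs = (item.rsplit(":", 1) for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

# Subset of the map logged in this journal, for demonstration.
line = ('feature gates: {map[CloudDualStackNodeIPs:true '
        'DisableKubeletCloudCredentialProviders:true KMSv1:true NodeSwap:false '
        'ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}')
print(parse_gates(line)["KMSv1"])   # True
```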
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839132 4766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839137 4766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839142 4766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839146 4766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839151 4766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839155 4766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839160 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839165 4766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839169 4766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839177 4766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839196 4766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839202 4766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839206 4766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839213 4766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839218 4766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839257 4766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839261 4766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839266 4766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839270 4766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.839275 4766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.839282 4766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.839477 4766 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.844140 4766 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.844255 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.845646 4766 server.go:997] "Starting client certificate rotation"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.845681 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.847509 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 01:41:46.368209583 +0000 UTC
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.847607 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.871099 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.874029 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.874223 4766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.892638 4766 log.go:25] "Validated CRI v1 runtime API"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.929728 4766 log.go:25] "Validated CRI v1 image API"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.931625 4766 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.936359 4766 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-16-17-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.936407 4766 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.952797 4766 manager.go:217] Machine: {Timestamp:2026-01-30 16:22:25.950067 +0000 UTC m=+0.588024356 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a00817eb-12ea-49e2-ab4d-6ba5164a8361 BootID:6a40bef8-b5e4-4d79-9bcd-48caff34a744 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:05:5e:29 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:05:5e:29 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:47:29 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f2:1b:00 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:28:e8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f0:b2:36 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:29:2a:0a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:47:a4:70:71:3f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:30:60:3a:f8:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953050 4766 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953332 4766 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953651 4766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953834 4766 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.953874 4766 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.954136 4766 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.954146 4766 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955114 4766 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955150 4766 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955749 4766 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.955850 4766 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962552 4766 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962587 4766 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962614 4766 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962631 4766 kubelet.go:324] "Adding apiserver pod source"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.962644 4766 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.967481 4766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.968364 4766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.970002 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.970124 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.970411 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.970479 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.970913 4766 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972792 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972848 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972863 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972875 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972897 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972911 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972924 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972946 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972960 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.972974 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.973017 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.973031 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.974204 4766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.974788 4766 server.go:1280] "Started kubelet"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975733 4766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975827 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.975737 4766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 16:22:25 crc systemd[1]: Started Kubernetes Kubelet.
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.976663 4766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981014 4766 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981473 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.981509 4766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.982534 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:21:32.225467725 +0000 UTC
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.982791 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984246 4766 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984270 4766 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.984397 4766 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.984493 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms"
Jan 30 16:22:25 crc kubenswrapper[4766]: W0130 16:22:25.985024 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.985092 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.985492 4766 factory.go:55] Registering systemd factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.985533 4766 factory.go:221] Registration of the systemd container factory successfully
Jan 30 16:22:25 crc kubenswrapper[4766]: E0130 16:22:25.984671 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8ec2d1cfd9cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:22:25.974753739 +0000 UTC m=+0.612711105,LastTimestamp:2026-01-30 16:22:25.974753739 +0000 UTC m=+0.612711105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986673 4766 factory.go:153] Registering CRI-O factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986713 4766 factory.go:221] Registration of the crio container factory successfully
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986793 4766 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986831 4766 factory.go:103] Registering Raw factory
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.986848 4766 manager.go:1196] Started watching for new ooms in manager
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.988642 4766 manager.go:319] Starting recovery of all containers
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.992941 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993066 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993089 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993109 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993129 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993149 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993171 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993219 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993242 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993263 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993282 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993303 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993355 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993385 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.993461 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.994317 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995290 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995328 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995342 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995360 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995373 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995387 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995400 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995423 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995440 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995459 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995478 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995493 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995531 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995549 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995564 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995577 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995593 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995607 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995620 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995644 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995664 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995678 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995693 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995713 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995727 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995771 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995788 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995807 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995825 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995838 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995851 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995866 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995882 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995896 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995908 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995921 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995940 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995955 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995970 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.995990 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996005 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996018 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996031 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996042 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996055 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996099 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996113 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996126 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996140 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996153 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996165 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996196 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996210 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996222 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996235 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996279 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996293 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996306 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996319 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996333 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996347 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996363 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996377 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996392 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996409 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996422 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996436 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996450 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996465 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996480 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996496 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996514 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996528 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996543 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996557 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996569 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996583 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996598 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996612 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996627 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996643 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996657 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996671 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996685 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996701 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996718 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996733 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996748 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996768 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996784 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996801 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996816 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996832 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996846 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996860 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996875 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996892 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996907 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996922 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996937 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996953 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996968 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996982 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.996997 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997012 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997027 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997042 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997056 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997068 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997081 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997093 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997107 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997122 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997135 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997148 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997162 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997222 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997240 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997252 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997264 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997278 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997292 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08"
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997307 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997323 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997336 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997353 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997367 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997380 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997395 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997409 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997422 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997435 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997449 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997465 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997478 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997493 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997507 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997520 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997533 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997547 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997560 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997575 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997589 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997602 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997617 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997634 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997649 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997664 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997678 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997692 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997705 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997719 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997732 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997745 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997761 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997776 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997788 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 16:22:25 crc kubenswrapper[4766]: I0130 16:22:25.997803 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997817 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997830 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997846 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997861 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997875 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997888 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997902 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997916 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997929 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997942 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997956 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997971 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.997987 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998000 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998013 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998026 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998040 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998053 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998068 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998081 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:25.998105 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.006676 4766 manager.go:324] Recovery completed Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010284 4766 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010375 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010421 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.010458 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011256 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011553 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011593 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011614 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011637 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011656 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011676 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011695 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011713 4766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011730 4766 reconstruct.go:97] "Volume reconstruction finished" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.011743 4766 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.019067 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.020777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022335 4766 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022356 4766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.022375 4766 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.035824 4766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038064 4766 kubelet_network_linux.go:50] "Initialized iptables rules." 
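The cpu_manager.go lines record the CPU manager coming up with the "none" policy and a 10 s reconcile period backed by an in-memory state store. A shape-only sketch of what that loop amounts to when the policy pins nothing; the types here are hypothetical stand-ins, not kubelet code:

```go
// Sketch: a CPU manager with a pluggable policy, where the "none" policy
// makes every reconcile pass a no-op, driven by a 10s ticker as logged above.
package main

import (
	"fmt"
	"time"
)

type policy interface {
	Name() string
	Reconcile() // ensure container cpusets match the policy's state
}

type nonePolicy struct{}

func (nonePolicy) Name() string { return "none" }
func (nonePolicy) Reconcile()   {} // "none" pins nothing, so nothing to fix up

func main() {
	p := nonePolicy{}
	fmt.Printf("Starting CPU manager policy=%q\n", p.Name())
	ticker := time.NewTicker(10 * time.Second) // reconcilePeriod="10s"
	defer ticker.Stop()
	for i := 0; i < 2; i++ { // bounded here so the sketch terminates
		<-ticker.C
		p.Reconcile()
		fmt.Println("Reconciling")
	}
}
```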
protocol="IPv6" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038103 4766 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.038132 4766 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.038293 4766 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.039228 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.039370 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.045319 4766 policy_none.go:49] "None policy: Start" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.046249 4766 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.046281 4766 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.083591 4766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105441 4766 manager.go:334] "Starting Device Plugin manager" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105656 4766 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.105668 4766 server.go:79] "Starting device plugin registration server" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106095 4766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106107 4766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106442 4766 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106545 4766 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.106554 4766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.112893 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.138349 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 16:22:26 crc kubenswrapper[4766]: 
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.138405 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139432 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139664 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.139713 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.140604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141198 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141341 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141447 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.141476 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142207 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142332 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.142423 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143145 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143325 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.143910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144370 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.144658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.145129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.185292 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.206492 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.207737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.208112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
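"SyncLoop ADD" with source="file" and the five "No sandbox for pod can be found" messages are the static pods (kube-rbac-proxy-crio, etcd, kube-apiserver, kube-controller-manager, kube-scheduler) being started directly from on-disk manifests, which is why they can come up while the API server still refuses connections; the node-lease failure above just retries on its 400 ms interval until one of them is the apiserver. A sketch of the file pod source, assuming the conventional /etc/kubernetes/manifests staticPodPath (the actual path on this host comes from the kubelet config, so treat it as an assumption):

```go
// Sketch: list the static pod manifests that feed "SyncLoop ADD" above.
// Static pods are read from a manifest directory on disk, not from the
// API server.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	manifestDir := "/etc/kubernetes/manifests" // assumed staticPodPath
	entries, err := os.ReadDir(manifestDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext != ".yaml" && ext != ".yml" && ext != ".json" {
			continue // the file source only decodes manifest files
		}
		// Each manifest becomes a pod whose UID is derived by hashing the
		// decoded object (e.g. kube-apiserver-crc above); a CRI sandbox is
		// then created for it, hence "No sandbox ... Need to start a new one".
		fmt.Println("static pod manifest:", filepath.Join(manifestDir, e.Name()))
	}
}
```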
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.208263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.208495 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.209172 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216666 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216686 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216718 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216756 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216771 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216794 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
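"Attempting to register node" followed by "Unable to register node with API server" is the kubelet POSTing its Node object to /api/v1/nodes and hitting the same refused connection; it simply retries until the apiserver it is booting becomes reachable, after which the volume reconciler entries that follow can also be confirmed against the API. A reduced sketch of that registration retry: the URL is from the log, the payload is a stand-in (the real kubelet sends a full v1.Node), and client-certificate authentication is omitted:

```go
// Sketch: register the node, retrying while the apiserver is unreachable.
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real kubelet authenticates with client certs; skipping
		// verification here only keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	node := []byte(`{"apiVersion":"v1","kind":"Node","metadata":{"name":"crc"}}`)
	for attempt := 1; ; attempt++ {
		resp, err := client.Post("https://api-int.crc.testing:6443/api/v1/nodes",
			"application/json", bytes.NewReader(node))
		if err != nil {
			fmt.Printf("attempt %d: unable to register node: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("registered node, status:", resp.Status)
		return
	}
}
```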
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216904 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216922 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.216980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.217032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 
crc kubenswrapper[4766]: I0130 16:22:26.318628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318688 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.318954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319168 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 
16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319248 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319276 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319297 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319313 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319335 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319261 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319318 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.319456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.410119 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc 
kubenswrapper[4766]: I0130 16:22:26.411957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.411981 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.412508 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.471020 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.480081 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.495906 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.506767 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.511689 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.528597 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48 WatchSource:0}: Error finding container 4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48: Status 404 returned error can't find the container with id 4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.530002 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818 WatchSource:0}: Error finding container 095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818: Status 404 returned error can't find the container with id 095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.535659 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5 WatchSource:0}: Error finding container 5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5: Status 404 returned error can't find the container with id 5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5 Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.541032 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f WatchSource:0}: Error finding container 9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f: Status 404 
returned error can't find the container with id 9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.586878 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.813076 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.815567 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.816168 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:26 crc kubenswrapper[4766]: W0130 16:22:26.852794 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: E0130 16:22:26.852906 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.977021 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:26 crc kubenswrapper[4766]: I0130 16:22:26.983053 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:47:56.313399336 +0000 UTC Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.021104 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.021212 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" 
logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.044158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4df0f7675be841bebdfa274a2f03a26d63afa96fc634b3b5e9d8424c47c16e48"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.045224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9df72ce5f537faef1832bd3204b9414467b59ca06fc5b69984500b878b6cb39f"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.046265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"585d4a4004f6a9bb513d5de66744c5230d2b3386db687e9ff734ea5afdb49052"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.047208 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b062338226e13b32f8132bf809903abe514df97859c5efe75f985b9fb1b8ec5"} Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.048243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"095f111d6f0c6efcdab70a8646c8a3ab93611cc7da0f19b2292794a74e109818"} Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.302432 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.302529 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.387717 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Jan 30 16:22:27 crc kubenswrapper[4766]: W0130 16:22:27.584665 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.584826 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.616395 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.618504 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.618981 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.937619 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:22:27 crc kubenswrapper[4766]: E0130 16:22:27.939025 4766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.977450 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:27 crc kubenswrapper[4766]: I0130 16:22:27.983760 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:38:32.042409131 +0000 UTC Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052542 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.052676 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054288 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054356 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.054494 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.058141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060020 4766 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060377 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.060377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.063316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064291 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" exitCode=0 Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.064563 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.065664 4766 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068011 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068059 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068062 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38"} Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.068369 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.069559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:28 crc kubenswrapper[4766]: W0130 16:22:28.557198 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:28 crc kubenswrapper[4766]: E0130 16:22:28.557319 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.977107 4766 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:28 crc kubenswrapper[4766]: I0130 16:22:28.984246 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:56:22.548172202 +0000 UTC Jan 30 16:22:28 crc kubenswrapper[4766]: E0130 16:22:28.989369 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="3.2s" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073025 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073071 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073075 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.073956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.076470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078578 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d" exitCode=0 Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.078917 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.086370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab"} Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089290 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.089354 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.090580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.091235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.219904 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221377 4766 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.221422 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:29 crc kubenswrapper[4766]: E0130 16:22:29.221907 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.264605 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:29 crc kubenswrapper[4766]: W0130 16:22:29.455462 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Jan 30 16:22:29 crc kubenswrapper[4766]: E0130 16:22:29.455590 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:22:29 crc kubenswrapper[4766]: I0130 16:22:29.985309 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:54:39.061625283 +0000 UTC Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.096289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036"} Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.096404 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.097585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099058 4766 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac" exitCode=0 Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac"} Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099237 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099306 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099312 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.099309 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.100871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.101816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:30 crc kubenswrapper[4766]: I0130 16:22:30.985889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:01:55.037436576 +0000 UTC Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.024632 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"374cae6e4bbbb88f2f6fc9093a4f5597b2afeae8361a9a76ccf384cae5d8b2b3"} Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 
16:22:31.106701 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"332a9a9c49123e23601444adafca95852030d0e19a682316100bc45b0f849209"} Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bfcc8c946ea5c547539386c797026307ba8bd235fd4694341695882ec2442702"} Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"108f1ca5a7cf1c4f0665b5b82b00c8b911dfe22582334836d3bc8a5afe17a1c6"} Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f01fb269c6fb534b4e45e60f3409c21e9700bc901eda3f975e990f77a9286838"} Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106784 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106823 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106786 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106862 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.106782 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.108405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:31 crc 
kubenswrapper[4766]: I0130 16:22:31.109275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.109303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.109314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:31 crc kubenswrapper[4766]: I0130 16:22:31.986701 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:37:00.940794838 +0000 UTC Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.069287 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.109070 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.109101 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.110502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.399770 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.399929 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.401254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.422937 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424523 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.424581 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:32 crc kubenswrapper[4766]: I0130 16:22:32.582715 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:32.987680 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:12:26.0525009 +0000 UTC Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.111275 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.112233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.314650 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.314819 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.315922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.316008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.316031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.988056 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:42:21.867496807 +0000 UTC Jan 30 16:22:33 crc kubenswrapper[4766]: I0130 16:22:33.992436 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.025273 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.025366 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.113589 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.114702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:34 crc kubenswrapper[4766]: I0130 16:22:34.988896 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:56:16.36936635 +0000 UTC Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.905281 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.905498 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.906600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.911288 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:35 crc kubenswrapper[4766]: I0130 16:22:35.989472 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:13:20.847315973 +0000 UTC Jan 30 16:22:36 crc kubenswrapper[4766]: E0130 16:22:36.113029 4766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117023 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.117856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:36 crc kubenswrapper[4766]: I0130 16:22:36.990195 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:16:23.748273929 +0000 UTC Jan 30 16:22:37 crc kubenswrapper[4766]: I0130 16:22:37.991125 4766 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:20:39.167917108 +0000 UTC Jan 30 16:22:38 crc kubenswrapper[4766]: I0130 16:22:38.992284 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:41:08.183008527 +0000 UTC Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.268293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.268404 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.269480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.977426 4766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:22:39 crc kubenswrapper[4766]: I0130 16:22:39.993118 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:09:24.394457605 +0000 UTC Jan 30 16:22:40 crc kubenswrapper[4766]: W0130 16:22:40.086795 4766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.087104 4766 trace.go:236] Trace[1612209459]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:30.085) (total time: 10001ms): Jan 30 16:22:40 crc kubenswrapper[4766]: Trace[1612209459]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:22:40.086) Jan 30 16:22:40 crc kubenswrapper[4766]: Trace[1612209459]: [10.0016397s] [10.0016397s] END Jan 30 16:22:40 crc kubenswrapper[4766]: E0130 16:22:40.087130 4766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.123083 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 16:22:40 crc 
kubenswrapper[4766]: I0130 16:22:40.123159 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.126350 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128060 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036" exitCode=255 Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036"} Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.128288 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.129552 4766 scope.go:117] "RemoveContainer" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.134277 4766 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.134339 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.566112 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.566366 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567892 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.567962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.603978 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 16:22:40 crc kubenswrapper[4766]: I0130 16:22:40.993532 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:30:49.133333254 +0000 UTC Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.133104 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.135651 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1"} Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.135803 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136026 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.136606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.137446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.150967 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 16:22:41 crc kubenswrapper[4766]: I0130 16:22:41.993658 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:19:42.166037136 +0000 UTC Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.137852 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138642 4766 
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.138642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:42 crc kubenswrapper[4766]: I0130 16:22:42.994484 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:52:14.64602606 +0000 UTC
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.995414 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:08:59.119691219 +0000 UTC
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.996981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.997147 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.997226 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:43 crc kubenswrapper[4766]: I0130 16:22:43.998211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.000608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.025753 4766 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.025836 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.142624 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.143498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.604381 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.973672 4766 apiserver.go:52] "Watching apiserver"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979257 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979530 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979871 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.979978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980036 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:22:44 crc kubenswrapper[4766]: E0130 16:22:44.980238 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980276 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980331 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.980340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:22:44 crc kubenswrapper[4766]: E0130 16:22:44.980386 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982570 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982603 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982570 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.982579 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983156 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983164 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.983170 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.985688 4766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.987171 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:22:44 crc kubenswrapper[4766]: I0130 16:22:44.996002 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:20:49.656259248 +0000 UTC Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.005845 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.021612 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.031867 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.046664 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.056457 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.065415 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.078072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.114001 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116908 4766 trace.go:236] Trace[1293579205]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:32.685) (total time: 12431ms): Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[1293579205]: ---"Objects listed" error: 12431ms (16:22:45.116) Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[1293579205]: [12.43141281s] [12.43141281s] END Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116952 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.116926 4766 trace.go:236] Trace[333219771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:30.769) (total time: 14346ms): Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[333219771]: ---"Objects listed" error: 14346ms (16:22:45.116) Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[333219771]: [14.346862219s] [14.346862219s] END Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117049 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117224 4766 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117717 4766 trace.go:236] Trace[188495366]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:22:33.854) (total time: 11263ms): Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[188495366]: ---"Objects listed" error: 11263ms (16:22:45.117) Jan 30 16:22:45 crc kubenswrapper[4766]: Trace[188495366]: [11.263564895s] [11.263564895s] END Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.117738 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.118375 4766 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.124891 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.148899 4766 csr.go:261] certificate signing request csr-sffz8 is approved, waiting to be issued
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.157340 4766 csr.go:257] certificate signing request csr-sffz8 is issued
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217789 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217913 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.217971 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218713 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218756 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.218872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219193 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219324 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219400 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219422 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219454 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219534 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219630 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219748 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219766 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219785 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219858 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219882 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219898 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219898 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219932 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219950 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219966 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.219997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220012 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220099 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220153 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220168 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220204 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220228 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220297 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220331 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220442 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220458 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220488 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220504 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220628 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220661 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220678 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220738 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220755 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220775 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220794 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220831 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220959 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.220981 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221005 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221094 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221112 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221154 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221247 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221272 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221296 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221314 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221329 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221346 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.221459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221476 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221541 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221556 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221570 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221588 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221602 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:22:45 crc kubenswrapper[4766]: 
I0130 16:22:45.221618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221634 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221650 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221727 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.221797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.221999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222020 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222073 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222089 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222105 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222121 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222160 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222203 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222312 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222383 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222409 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222435 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222459 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222481 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222530 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222589 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222691 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222771 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222830 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222847 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.222920 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.722895294 +0000 UTC m=+20.360852730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.222988 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223056 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223068 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223090 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223149 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223203 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223295 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223304 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223351 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223377 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223431 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223583 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223615 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223771 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223822 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223848 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223915 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223931 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223945 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223959 4766 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223973 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223985 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223999 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224011 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224025 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.224089 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224102 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224117 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224130 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224143 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224155 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224167 4766 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224199 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224217 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224230 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224243 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229381 4766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.245331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.273671 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.264604 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223360 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223417 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223662 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223839 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.223941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224121 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224155 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224245 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224499 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224536 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224648 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.224826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225001 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225010 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225125 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.225533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227819 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.227963 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.228110 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228215 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228370 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228536 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228621 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.228983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229109 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.229164 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.236000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.238901 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.238932 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239229 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.239724 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.240473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.241752 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243844 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.243978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244197 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.244547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.244940 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.234240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.245484 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.246904 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.247372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.247699 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.248230 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.250350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.250609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251698 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.251764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252261 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252499 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252583 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.252957 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253588 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253753 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253858 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286204 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286341 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.286930 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287235 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.287697 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288057 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.288518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289232 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.289783 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.290607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.253764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.263663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.264511 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.290784 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.290807 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271082 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271712 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.271928 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.272171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.274251 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.276411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.276757 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277060 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277083 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.277563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278004 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278103 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278521 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.278551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.280302 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281628 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281795 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.281978 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.282442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.282877 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.283076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.283445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.284322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.285190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291094 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.791067212 +0000 UTC m=+20.429024558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291373 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291442 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291492 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291512 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.291592 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:45.791575246 +0000 UTC m=+20.429532592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291606 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291658 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-flxfz"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.291722 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.308484 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.292507 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 16:22:45.791739981 +0000 UTC m=+20.429697327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.292886 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293151 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293871 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.292854 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293548 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294326 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.294576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295122 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295301 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.295954 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.296065 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.304211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.304338 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306457 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.306660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.307833 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.308398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.308802 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 16:22:45.8087791 +0000 UTC m=+20.446736446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293668 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.293460 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vhmx5"] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.315750 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316009 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316045 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.316312 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.318875 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320040 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320131 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.320542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321440 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321643 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.321998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.322167 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.323277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.330576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.331683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332341 4766 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332363 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332376 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332389 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332400 4766 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.332411 4766 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332422 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332433 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332444 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332454 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332469 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332479 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332489 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332501 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332514 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332526 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332541 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332552 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332563 4766 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332575 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332587 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332599 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332611 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332622 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332634 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332645 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332657 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332668 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332680 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332693 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332705 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332718 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332730 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332743 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332757 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332768 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332780 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332792 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332803 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332813 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332825 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332836 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332849 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332860 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.332871 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332883 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332894 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332907 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332918 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332930 4766 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332941 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332952 4766 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332964 4766 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332976 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.332988 4766 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333001 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333013 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333026 4766 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333036 4766 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333064 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333076 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333086 4766 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333098 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333108 4766 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333121 4766 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333132 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333143 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333154 4766 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333164 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333206 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333220 4766 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333231 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333242 4766 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333254 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333264 4766 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333275 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333287 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333298 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333308 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333319 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333329 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333342 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333354 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333365 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333376 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333387 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333401 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333412 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333424 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333434 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333447 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333457 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333468 4766 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333479 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333491 4766 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333503 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333514 4766 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333526 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333538 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333549 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333558 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333567 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333575 4766 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333585 4766 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333594 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333603 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333611 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333619 4766 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333630 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333640 4766 
reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333651 4766 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333661 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333672 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333682 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333696 4766 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333709 4766 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333722 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333732 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333743 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333753 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333766 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333777 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333789 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333800 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333811 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333822 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333832 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333842 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333855 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333869 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333879 4766 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333889 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333899 4766 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333910 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333927 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333937 4766 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333947 4766 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333957 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333967 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333979 4766 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333989 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.333999 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334010 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334021 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334033 4766 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334043 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334053 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334063 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334073 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334083 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334093 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334103 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334113 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334124 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334134 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334144 4766 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334154 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.334166 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335609 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335632 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335645 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335720 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.335928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.338031 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339739 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339886 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.339995 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.340107 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.345790 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.346690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.348048 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.351509 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.352946 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.357079 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.366416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.368961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.383305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.386943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.388466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") 
pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436833 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.436920 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437024 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437103 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437190 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437275 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437347 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437414 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437480 4766 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437549 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437626 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437694 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437763 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.437833 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.458402 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.506469 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.522229 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.533300 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538204 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-host\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538646 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.538821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bd09169-41b7-4eb3-80a5-a842e79f7d94-hosts-file\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.539491 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-serviceca\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.544027 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.550640 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.557231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw6f\" (UniqueName: \"kubernetes.io/projected/8bd09169-41b7-4eb3-80a5-a842e79f7d94-kube-api-access-gnw6f\") pod \"node-resolver-flxfz\" (UID: \"8bd09169-41b7-4eb3-80a5-a842e79f7d94\") " pod="openshift-dns/node-resolver-flxfz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.557953 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctb7v\" (UniqueName: \"kubernetes.io/projected/c3a8d75a-1f1e-416a-a96b-c774ffdc24b2-kube-api-access-ctb7v\") pod \"node-ca-vhmx5\" (UID: \"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\") " pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.565591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.574653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.585079 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.596358 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.596385 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.606572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.608758 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849 WatchSource:0}: Error finding container d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849: Status 404 returned error can't find the container with id d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.613934 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.618651 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1 WatchSource:0}: Error finding container ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1: Status 404 returned error can't find the container with id ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.629400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.643378 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.654018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-flxfz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.656231 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vhmx5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.656297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.669439 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ddhn5"]
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.669788 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673395 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673425 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673550 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.673568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.674887 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.675316 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"]
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.684255 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.684890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vvzk9"]
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.686092 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.692938 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-l6xdr"]
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.694361 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695060 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695279 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695481 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695362 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.695350 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.696614 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.707637 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.707672 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717441 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.717766 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718139 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718303 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.718458 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.727406 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739702 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739887 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.739976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740064 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740306 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740373 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740448 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9kc\" (UniqueName: \"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740571 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740866 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.740937 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741311 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741639 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742045 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.742064 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.742043823 +0000 UTC m=+21.380001219 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.741918 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742519 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742555 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742644 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName:
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742665 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742710 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742799 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.742904 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.764991 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.775910 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.785552 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.796564 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.810244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.821688 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.833613 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.841532 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.843984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844087 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844109 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 
16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0a25c516-3d8c-4fdb-9425-692ce650f427-rootfs\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844309 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 
16:22:45.844330 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-cnibin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844405 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-cnibin\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844415 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844538 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844564 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc 
kubenswrapper[4766]: I0130 16:22:45.844541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844579 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844584 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.844634 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.844615877 +0000 UTC m=+21.482573283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844658 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844720 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9kc\" (UniqueName: 
\"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844776 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844866 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844896 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-cni-binary-copy\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-hostroot\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845030 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.844976 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-os-release\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: 
\"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845071 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845094 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845101 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845170 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: 
\"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845254 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845284 4766 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-k8s-cni-cncf-io\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-bin\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845467 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-cni-multus\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-conf-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845391 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-netns\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.845509 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845632 4766 
reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845651 4766 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845670 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845685 4766 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845693 4766 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845727 4766 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845632 4766 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845749 4766 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845767 4766 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845779 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845787 4766 
reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845804 4766 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845736 4766 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845660 4766 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845830 4766 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845834 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845753 4766 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845805 4766 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.845823 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g/status\": http2: client connection force closed via ClientConn.Close" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845861 4766 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845869 4766 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845852 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846053 4766 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845878 4766 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845884 4766 reflector.go:484] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845893 4766 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845897 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845905 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845913 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845916 4766 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845928 4766 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-system-cni-dir\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846241 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-system-cni-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846287 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846311 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846328 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-run-multus-certs\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846345 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-os-release\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846290 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.846281173 +0000 UTC m=+21.484238519 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846420 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3a74bc5e-af98-4849-820c-7056caabc485-multus-daemon-config\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846432 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.846410257 +0000 UTC m=+21.484367663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845939 4766 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845946 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845959 4766 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845972 4766 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846478 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845984 4766 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: 
I0130 16:22:45.846504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845818 4766 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846541 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.8465291 +0000 UTC m=+21.484486446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845753 4766 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846012 4766 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846033 4766 projected.go:194] Error preparing data for projected volume kube-api-access-dv5xn for pod openshift-multus/multus-additional-cni-plugins-vvzk9: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": write tcp 38.102.83.103:45762->38.102.83.103:6443: use of closed network connection Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.845928 4766 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-host-var-lib-kubelet\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-multus-socket-dir-parent\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.846672 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3a74bc5e-af98-4849-820c-7056caabc485-etc-kubernetes\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: W0130 16:22:45.846775 4766 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Jan 30 16:22:45 crc kubenswrapper[4766]: E0130 16:22:45.846891 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn podName:8da0c398-554f-47ad-aada-70e4b5c9ec98 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:46.346782417 +0000 UTC m=+20.984739853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dv5xn" (UniqueName: "kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn") pod "multus-additional-cni-plugins-vvzk9" (UID: "8da0c398-554f-47ad-aada-70e4b5c9ec98") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": write tcp 38.102.83.103:45762->38.102.83.103:6443: use of closed network connection Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.847145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a25c516-3d8c-4fdb-9425-692ce650f427-mcd-auth-proxy-config\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8da0c398-554f-47ad-aada-70e4b5c9ec98-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-binary-copy\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848368 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8da0c398-554f-47ad-aada-70e4b5c9ec98-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848734 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.848841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.853116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a25c516-3d8c-4fdb-9425-692ce650f427-proxy-tls\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.853780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.864381 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879190 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9kc\" (UniqueName: \"kubernetes.io/projected/0a25c516-3d8c-4fdb-9425-692ce650f427-kube-api-access-6s9kc\") pod \"machine-config-daemon-ddhn5\" (UID: \"0a25c516-3d8c-4fdb-9425-692ce650f427\") " pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879212 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25lp6\" (UniqueName: \"kubernetes.io/projected/3a74bc5e-af98-4849-820c-7056caabc485-kube-api-access-25lp6\") pod \"multus-l6xdr\" (UID: \"3a74bc5e-af98-4849-820c-7056caabc485\") " pod="openshift-multus/multus-l6xdr" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.879904 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"ovnkube-node-54ngm\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.886449 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.897283 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.908570 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.917390 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.927416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.938434 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.949670 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.961355 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.977435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.993612 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:22:45 crc kubenswrapper[4766]: I0130 16:22:45.996650 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:51:31.123706674 +0000 UTC Jan 30 16:22:46 crc kubenswrapper[4766]: W0130 16:22:46.003693 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a25c516_3d8c_4fdb_9425_692ce650f427.slice/crio-368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0 WatchSource:0}: Error finding container 368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0: Status 404 returned error can't find the container with id 368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.029478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.039443 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.039592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.044591 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.045428 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.046761 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.047523 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.048730 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.049525 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.050286 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.051457 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.052261 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.052483 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-l6xdr" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.055367 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.055743 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.056103 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.057021 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.059692 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.060342 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.063126 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.063701 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.065720 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.066329 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.066904 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.069525 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.070720 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.071369 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.072577 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.073056 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.073809 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.076830 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.077498 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.079622 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.083685 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.085395 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 
16:22:46.086295 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.086639 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.087280 4766 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.087400 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.089827 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.091119 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.091758 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.095072 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.096881 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.098201 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.100521 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.101400 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.102512 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.103274 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.104457 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.105403 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.109916 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.110806 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.112039 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.113934 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.115242 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.116196 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.116821 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.119842 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.120838 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.121863 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.125572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.148102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.148146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"368218924bf4b48531e7c0de2fc7c25d3580b39e016b0d38da73383e35fef3f0"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.151245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vhmx5" event={"ID":"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2","Type":"ContainerStarted","Data":"e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.151277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vhmx5" event={"ID":"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2","Type":"ContainerStarted","Data":"bcf6fab4871f54ca9be9d9fc2ac5a6250af7cf9558678c7a35c43165a466ecbd"} Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.152004 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a299e8_188d_4777_bb82_a0994feabcff.slice/crio-458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a299e8_188d_4777_bb82_a0994feabcff.slice/crio-conmon-458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.153571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ea6918573df502c85bf5c5765559e335385923375685c895d6c6d8da943d38a1"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.155858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.155962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d478013c2f2e56c511f41a063283cb549fbb00ee8820c8d4bd123af5f807a849"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.158965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ef229e024d3235ddfa3de93d2c9a064c5b96d1b262c193283b09b0981dfc0409"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.159025 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 16:17:45 +0000 UTC, rotation deadline is 2026-11-30 11:01:48.56817191 +0000 UTC Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.159062 4766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7290h39m2.409111813s for next certificate rotation Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.160889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"da64b2bf34b406c771c571dae893c26b44c0c80fc71584fafe8548d33fc5cbe3"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.165609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.167289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flxfz" event={"ID":"8bd09169-41b7-4eb3-80a5-a842e79f7d94","Type":"ContainerStarted","Data":"4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.167334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flxfz" event={"ID":"8bd09169-41b7-4eb3-80a5-a842e79f7d94","Type":"ContainerStarted","Data":"9f06270adae90c7d7bd6c122e885399dfe099c64dfb53fdba92e06b97f1fb78a"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.169287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.169327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"b7c7571b036dc1cbf0576f5638a00f9530f0e7ad9d69b4b12af59327bef5efe3"} Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.208773 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.242834 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.312435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.328170 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.350304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.395064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.406006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv5xn\" (UniqueName: \"kubernetes.io/projected/8da0c398-554f-47ad-aada-70e4b5c9ec98-kube-api-access-dv5xn\") pod \"multus-additional-cni-plugins-vvzk9\" (UID: \"8da0c398-554f-47ad-aada-70e4b5c9ec98\") " pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.435827 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.467442 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.520868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.557895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.586262 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.626267 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.639040 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" Jan 30 16:22:46 crc kubenswrapper[4766]: W0130 16:22:46.661909 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da0c398_554f_47ad_aada_70e4b5c9ec98.slice/crio-a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06 WatchSource:0}: Error finding container a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06: Status 404 returned error can't find the container with id a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.677202 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.680463 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.695551 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.742686 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.753705 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.753941 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.75392455 +0000 UTC m=+23.391881896 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.775737 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.805673 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.817497 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.836465 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 
16:22:46.855193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855243 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.855265 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855319 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855299632 +0000 UTC m=+23.493256978 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855355 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855364 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855383 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855402 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855410 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855392714 +0000 UTC m=+23.493350060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.855429325 +0000 UTC m=+23.493386741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855499 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855559 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855571 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: E0130 16:22:46.855631 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:48.85561534 +0000 UTC m=+23.493572676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.884791 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.919522 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.941384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.955144 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.975971 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:22:46 crc kubenswrapper[4766]: I0130 16:22:46.997354 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:55:12.565978706 +0000 UTC Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.016220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.036057 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.038952 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.038990 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:47 crc kubenswrapper[4766]: E0130 16:22:47.039053 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:47 crc kubenswrapper[4766]: E0130 16:22:47.039199 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.055069 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.076535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.109399 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.115890 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.136421 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.174062 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.175293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.175335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"a1b4aba2432644a382e15a810232a9c7825eeca7839a3a3c32e16ce0a8000c06"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.176286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.177007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179610 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" exitCode=0 Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179642 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179709 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.179730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.195485 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.216084 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.236216 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.255910 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.275227 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.311624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.315760 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.335968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.356054 4766 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.375730 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.400536 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.416430 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.436074 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.476514 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.495947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.517380 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.535653 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.556893 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.595493 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.597640 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.615855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.636938 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.664737 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.704588 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.743769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.787143 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.824098 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.863340 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.906344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.943984 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.986727 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:47 crc kubenswrapper[4766]: I0130 16:22:47.998085 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:10:31.397162052 +0000 UTC Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.024791 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.038578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.038726 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.063231 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.104123 4766 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.142815 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.183716 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8" exitCode=0 Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.183779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8"} Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.185916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120"} Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.193762 4766 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"nam
e\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.232856 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.267077 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.303219 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.349414 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.387311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc 
kubenswrapper[4766]: I0130 16:22:48.425200 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.465863 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.503491 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.543367 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.584461 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.623819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.668005 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.704933 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.744505 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.774111 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.774294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.774273781 +0000 UTC m=+27.412231127 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.783109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:48 crc 
kubenswrapper[4766]: I0130 16:22:48.875361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.875417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875446 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875518 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875501609 +0000 UTC m=+27.513458955 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875533 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875537 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875552 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875565 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875569 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875591 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875599 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875582431 +0000 UTC m=+27.513539777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875603 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875617 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875609272 +0000 UTC m=+27.513566728 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: E0130 16:22:48.875642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:22:52.875630142 +0000 UTC m=+27.513587488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:48 crc kubenswrapper[4766]: I0130 16:22:48.998699 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:10:34.807219361 +0000 UTC Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.039308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.039331 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:49 crc kubenswrapper[4766]: E0130 16:22:49.039452 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:49 crc kubenswrapper[4766]: E0130 16:22:49.040071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.192664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.194957 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47" exitCode=0 Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.195148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47"} Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.212301 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.232384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745326
5a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",
\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.252384 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.265600 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.276297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.287160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.300314 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.316374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.328591 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.339228 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.354618 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.367738 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.379539 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:49 crc kubenswrapper[4766]: I0130 16:22:49.999353 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:28:58.711856011 +0000 UTC Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.038858 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:50 crc kubenswrapper[4766]: E0130 16:22:50.038987 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.200313 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65" exitCode=0 Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.200359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65"} Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.217605 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.229607 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.247153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.258923 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.272211 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.284722 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.298118 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.308167 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.321905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.330952 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.349031 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.366641 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:50 crc kubenswrapper[4766]: I0130 16:22:50.383456 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.000417 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:53:10.037984439 +0000 UTC Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.028771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.032288 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038380 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.038605 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.038535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.038699 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.039975 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.050852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.066972 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.080098 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.091119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.105088 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.115407 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.126752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.141205 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.151849 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.167489 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 
2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.180483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.194763 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.204938 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf" exitCode=0 Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.205023 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.209109 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.210394 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.222164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.234436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.244488 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.257802 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.268694 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.283523 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.296016 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.308928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.319773 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.337609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.349964 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.361331 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.372319 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.381466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc 
kubenswrapper[4766]: I0130 16:22:51.401665 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.421619 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.434713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.446620 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.458165 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.469718 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.480361 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.491423 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.502244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.513769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.518845 4766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.520871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.521008 4766 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:22:51 crc 
kubenswrapper[4766]: I0130 16:22:51.524996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.527057 4766 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.527322 4766 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.528303 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.536785 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.539567 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a
00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542726 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.542751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.546376 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.554210 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.557969 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.568891 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
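Every retry in the burst above fails identically: the node status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook at 127.0.0.1:9743, whose serving certificate expired on 2025-08-24 while the node clock reads 2026-01-30. A throwaway Go probe, not part of any cluster tooling (the address is taken from the log lines), can confirm the validity window the same way the TLS layer does:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint taken from the kubelet error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // read the cert without trusting the chain
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
	// Same comparison that produces "certificate has expired or is not yet valid".
	if now := time.Now(); now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Println("certificate is outside its validity window at", now)
	}
}

On this node the probe would report NotAfter 2025-08-24T17:21:41Z, matching the error string repeated in every failed PATCH.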
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.572467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.583952 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.587121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.598403 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:51 crc kubenswrapper[4766]: E0130 16:22:51.598524 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.599997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
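With "update node status exceeds retry count" the kubelet abandons this update cycle and immediately starts the next heartbeat. The four condition tuples it keeps trying to write (MemoryPressure, DiskPressure, PIDPressure, Ready) are ordinary fields of the Node object and can be read back with client-go once an API server is reachable. A minimal sketch, assuming client-go is available and a kubeconfig path that is a guess rather than something taken from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for the environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "crc" is the node name used throughout this log.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same condition tuples the kubelet is trying to patch.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}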
event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.600060 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.627976 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.701965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.702046 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.804501 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906418 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:51 crc kubenswrapper[4766]: I0130 16:22:51.906808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:51Z","lastTransitionTime":"2026-01-30T16:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.000986 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:08:31.044888516 +0000 UTC Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009133 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009193 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.009232 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.039261 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.039381 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
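The certificate_manager entry above is worth a second look: the kubelet-serving certificate is valid until 2026-02-24, yet the computed rotation deadline (2025-11-07) already lies months in the past, so rotation is due immediately; this is consistent with a VM resumed long after its certificates were issued. Upstream client-go picks that deadline at a jittered point roughly 70 to 90 percent of the way through the validity window. A sketch of that idea, where the exact jitter policy and the NotBefore date are assumptions, not values from this log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline approximates how client-go's certificate manager picks a
// renewal point: a random instant in roughly the last 10-30% of the
// certificate lifetime. The precise policy lives in k8s.io/client-go.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
	return notBefore.Add(jitter)
}

func main() {
	// NotAfter comes from the log line; NotBefore is a hypothetical issue date.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(0, -12, 0) // assumed one-year certificate
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}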
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.111825 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213727 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213766 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.213766 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.216082 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.216117 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.221734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b"}
Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.238492 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.268009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.269710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.278609 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.293251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.305464 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.317428 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.320400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.330927 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.343874 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.353769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.375284 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.389211 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.400541 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.412734 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419925 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.419936 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.424023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.435968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.475070 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b242
8318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.476659 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.487589 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.499714 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.510616 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.521971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.529428 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.540816 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.552942 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.563287 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.574887 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.588529 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.601080 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624064 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.624272 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.665976 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.714715 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.726685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.812526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.812733 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.812702792 +0000 UTC m=+35.450660138 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.829217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913850 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.913906 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913920 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.913901199 +0000 UTC m=+35.551858555 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.913996 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914041 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.914027292 +0000 UTC m=+35.551984658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914050 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914098 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914050 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914143 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914167 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914271 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.914248238 +0000 UTC m=+35.552205624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914112 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: E0130 16:22:52.914330 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.91431943 +0000 UTC m=+35.552276856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:52 crc kubenswrapper[4766]: I0130 16:22:52.931843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:52Z","lastTransitionTime":"2026-01-30T16:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.001625 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:37:15.370350632 +0000 UTC Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.033849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.034338 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.039015 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:53 crc kubenswrapper[4766]: E0130 16:22:53.039380 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.039015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:53 crc kubenswrapper[4766]: E0130 16:22:53.039600 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.156962 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.224718 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.252645 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/li
b/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259147 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.259190 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.267814 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.282615 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"na
me\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.296314 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.307271 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.326770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.347391 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361225 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.361251 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.362632 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.378907 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.393532 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.405288 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.417771 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.430449 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.444557 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.463666 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.566730 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.669287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.771620 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.873889 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978516 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:53 crc kubenswrapper[4766]: I0130 16:22:53.978648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:53Z","lastTransitionTime":"2026-01-30T16:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.002383 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:36:59.794463511 +0000 UTC Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.038580 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:54 crc kubenswrapper[4766]: E0130 16:22:54.038775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.080861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.081670 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.184592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229390 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b" exitCode=0 Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.229538 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.243313 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 
16:22:54.267151 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.285696 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.287986 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.298705 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.310408 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.321703 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.333043 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.345391 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.358572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.369657 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.385364 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.390284 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.396937 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.411488 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.424669 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.494800 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.596996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.597071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.700353 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.802153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.904740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.905281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:54Z","lastTransitionTime":"2026-01-30T16:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:54 crc kubenswrapper[4766]: I0130 16:22:54.984914 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.003576 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:06:37.253372828 +0000 UTC Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.007900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.039201 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:55 crc kubenswrapper[4766]: E0130 16:22:55.039329 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.039215 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:55 crc kubenswrapper[4766]: E0130 16:22:55.039643 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.109917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.110525 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214205 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.214281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.238053 4766 generic.go:334] "Generic (PLEG): container finished" podID="8da0c398-554f-47ad-aada-70e4b5c9ec98" containerID="73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04" exitCode=0 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.238102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerDied","Data":"73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.253090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.264736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.278226 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.290644 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.301383 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.316508 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.323731 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.376973 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.400603 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418367 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.418993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419003 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.419023 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.434563 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.449921 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.463976 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.479072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.493770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.522775 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625431 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.625464 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728222 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.728239 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.784060 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.819735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.829979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.830033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:55 crc kubenswrapper[4766]: I0130 16:22:55.932639 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:55Z","lastTransitionTime":"2026-01-30T16:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.003727 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:12:44.507042955 +0000 UTC Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.034941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.035040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.035061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.039272 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:56 crc kubenswrapper[4766]: E0130 16:22:56.039404 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.059804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.076128 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.088619 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.108988 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.124711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.136621 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137158 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137404 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.137472 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.151752 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.162726 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.173754 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.188223 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.208628 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.219867 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.231482 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.240356 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.250826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.253562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" event={"ID":"8da0c398-554f-47ad-aada-70e4b5c9ec98","Type":"ContainerStarted","Data":"b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.255249 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/0.log" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.257541 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d" exitCode=1 Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.257584 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.258301 4766 scope.go:117] "RemoveContainer" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.268631 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.286031 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.307881 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.321252 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.332772 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342839 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.342881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.343801 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.353164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.367301 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.381766 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.394262 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.407129 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.424333 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.438001 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445549 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.445564 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.453015 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.469491 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.481276 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.491599 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.503564 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.532073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.547646 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.548386 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.569103 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.582751 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.591063 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.604879 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.618010 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.636587 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834d
b86128acbeb613ecd26cc46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 
16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.660577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.665327 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.681543 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.762583 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.865562 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:56 crc kubenswrapper[4766]: I0130 16:22:56.967991 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:56Z","lastTransitionTime":"2026-01-30T16:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.004392 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:31:54.589321331 +0000 UTC Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.039350 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.039366 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.039486 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.039590 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.070347 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.173390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.263207 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.264038 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/0.log" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.267999 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" exitCode=1 Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.268031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.268073 4766 scope.go:117] "RemoveContainer" containerID="1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.269892 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" Jan 30 16:22:57 crc kubenswrapper[4766]: E0130 16:22:57.270222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.275439 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.287921 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.306999 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.320490 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.340674 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.357062 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.372653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.377966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.378056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.384788 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.403782 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.419228 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.430344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.440993 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.462374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cff
d269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.481871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.485214 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.499281 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.570041 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.584509 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.591839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cff
d269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.609065 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.622416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.633082 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.641330 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.680581 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.719990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.720088 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.722374 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.732002 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf"] Jan 30 16:22:57 crc 
kubenswrapper[4766]: I0130 16:22:57.732256 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.732467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.746555 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.747977 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.765632 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.776289 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.787460 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796787 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.796918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.797042 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.797497 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.805416 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.818153 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822248 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.822291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.829372 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.838453 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.851845 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.861841 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.877826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.890089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898820 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.898968 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.899003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.899480 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5feae404-d53f-4bf5-af27-07a7ce350594-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.904397 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.905038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5feae404-d53f-4bf5-af27-07a7ce350594-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.918920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc5l\" (UniqueName: \"kubernetes.io/projected/5feae404-d53f-4bf5-af27-07a7ce350594-kube-api-access-7rc5l\") pod \"ovnkube-control-plane-749d76644c-rg9cf\" (UID: \"5feae404-d53f-4bf5-af27-07a7ce350594\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.920616 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.925280 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:57Z","lastTransitionTime":"2026-01-30T16:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.934229 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.947246 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.959569 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.972251 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.982937 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:57 crc kubenswrapper[4766]: I0130 16:22:57.996822 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.005162 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:12:03.659352798 +0000 UTC Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.008765 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.027428 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.038982 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:22:58 crc kubenswrapper[4766]: E0130 16:22:58.039242 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.056978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" Jan 30 16:22:58 crc kubenswrapper[4766]: W0130 16:22:58.071666 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5feae404_d53f_4bf5_af27_07a7ce350594.slice/crio-cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56 WatchSource:0}: Error finding container cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56: Status 404 returned error can't find the container with id cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56 Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.131679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.234557 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.272148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"cbaa1eb895ea8895c467293aa41dda73f8914e14ffda4aecade43866cbf14f56"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.274041 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337101 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.337140 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.439997 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.542746 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.644847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.747715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.851841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.852664 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:58 crc kubenswrapper[4766]: I0130 16:22:58.955994 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:58Z","lastTransitionTime":"2026-01-30T16:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.005752 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:23:06.580802856 +0000 UTC Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.039431 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.039510 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.039561 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.039752 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.058760 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.160995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.161082 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.263684 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.280547 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.280790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" event={"ID":"5feae404-d53f-4bf5-af27-07a7ce350594","Type":"ContainerStarted","Data":"0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.294117 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.310090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.325298 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.338274 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.348030 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.360240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.366489 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.375164 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.393160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.406523 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.417499 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.428166 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.439058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.452058 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.464889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.468895 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.476369 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.548993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"] Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.549497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.549566 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.564533 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.571515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.595311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.616043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.616085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") 
" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.619963 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 
16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.635373 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.646804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cn
i.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.661476 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.673772 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.676089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.687392 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.699844 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.710580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.716992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.717061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.717130 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:22:59 crc kubenswrapper[4766]: E0130 16:22:59.717238 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:00.217218925 +0000 UTC m=+34.855176351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.721652 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.731673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp9nh\" (UniqueName: \"kubernetes.io/projected/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-kube-api-access-mp9nh\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.738895 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.752389 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.763257 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.776641 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.778049 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.789754 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:22:59Z is 
after 2025-08-24T17:21:41Z" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.878898 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:22:59 crc kubenswrapper[4766]: I0130 16:22:59.982222 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:22:59Z","lastTransitionTime":"2026-01-30T16:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.006483 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:07:03.003210926 +0000 UTC Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.039119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.039249 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085032 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.085995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.086206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.086300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.188705 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.222397 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.222519 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.222586 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:01.222567093 +0000 UTC m=+35.860524439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.291301 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.394744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.395338 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.498577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601479 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.601488 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.703977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.704056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806308 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806346 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.806358 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.830973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.831155 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.831134833 +0000 UTC m=+51.469092189 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.909147 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:00Z","lastTransitionTime":"2026-01-30T16:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.931919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.931990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.932037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:00 crc kubenswrapper[4766]: I0130 16:23:00.932069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932128 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932153 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932167 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932228 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932240 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932226717 +0000 UTC m=+51.570184063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932358 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.93234185 +0000 UTC m=+51.570299196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932235 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932384 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932379001 +0000 UTC m=+51.570336347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932247 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932496 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932528 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:00 crc kubenswrapper[4766]: E0130 16:23:00.932633 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:16.932600697 +0000 UTC m=+51.570558083 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.006653 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:55:20.161190173 +0000 UTC Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.012099 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039100 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039097 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039284 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.039239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039376 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.039446 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.115428 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.218699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.235729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.235990 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.236079 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:03.236058164 +0000 UTC m=+37.874015510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.322224 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.423986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.424074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.526816 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629768 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.629779 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.732712 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.752566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.763489 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.772227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.782578 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.786408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.797667 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801525 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.801537 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.812574 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.816255 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.829880 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:01 crc kubenswrapper[4766]: E0130 16:23:01.830018 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.835682 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:01 crc kubenswrapper[4766]: I0130 16:23:01.938388 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:01Z","lastTransitionTime":"2026-01-30T16:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.007159 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:18:25.255086748 +0000 UTC Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.038584 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:02 crc kubenswrapper[4766]: E0130 16:23:02.038759 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.040399 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.142424 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.244928 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.346990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.347005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.347014 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.449273 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.551650 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.653812 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.756764 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.858691 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:02 crc kubenswrapper[4766]: I0130 16:23:02.961788 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:02Z","lastTransitionTime":"2026-01-30T16:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.007739 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:33:13.33571322 +0000 UTC Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039546 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.039557 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039620 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039761 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.039805 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.064233 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.165989 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.166058 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.255019 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.255232 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:03 crc kubenswrapper[4766]: E0130 16:23:03.255329 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:07.255309326 +0000 UTC m=+41.893266752 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268456 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.268490 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.370783 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
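
The metrics-certs mount for network-metrics-daemon-xrldv fails because the openshift-multus/metrics-daemon-secret object is not (yet) registered with the kubelet, and the retry is pushed out by a per-operation exponential backoff: the "(durationBeforeRetry 4s)" annotation is the current delay. A rough sketch of that doubling schedule; the initial delay, factor, and cap below are assumptions modeled on the kubelet's nested pending-operations defaults, not values read from this log:

    from datetime import timedelta

    # Assumed backoff parameters (illustrative, not taken from the log).
    initial = timedelta(milliseconds=500)
    factor = 2
    cap = timedelta(minutes=2, seconds=2)

    delay = initial
    for attempt in range(1, 9):
        print(f"attempt {attempt}: wait {delay.total_seconds():.1f}s")
        delay = min(delay * factor, cap)
    # 0.5s -> 1s -> 2s -> 4s -> ...: the log's 4s delay sits at the
    # fourth step of this schedule and keeps doubling up to the cap.
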
Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473518 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.473552 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.576384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.679226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.782903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.782959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.783033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.885975 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:03 crc kubenswrapper[4766]: I0130 16:23:03.988853 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:03Z","lastTransitionTime":"2026-01-30T16:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.008748 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:26:35.286594272 +0000 UTC Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.039330 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:04 crc kubenswrapper[4766]: E0130 16:23:04.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.091556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.194300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296253 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.296265 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399430 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.399511 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.501625 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.604135 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709878 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.709914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.812682 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:04 crc kubenswrapper[4766]: I0130 16:23:04.914903 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:04Z","lastTransitionTime":"2026-01-30T16:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.009373 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:52:34.458294173 +0000 UTC Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.017375 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038337 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038360 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.038419 4766 util.go:30] "No sandbox for pod can be found. 
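
The kubelet-serving certificate_manager lines look odd at first glance: the expiration is fixed at 2026-02-24 05:53:03, yet every pass prints a different rotation deadline (2025-11-07, 2025-12-09, 2025-12-11, 2026-01-04, ...). That is expected behavior: client-go's certificate manager re-rolls a jittered deadline somewhere in roughly the last 70-90% of the certificate's lifetime each time it checks. A sketch of that calculation; the notBefore below is an inference (a one-year certificate issued 2025-02-24 matches the spread of deadlines seen here), and the 0.7 + 0.2*rand jitter is an assumption based on the upstream client-go source:

    import random
    from datetime import datetime, timedelta

    not_before = datetime(2025, 2, 24, 5, 53, 3)   # inferred, not in the log
    not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log

    total = not_after - not_before
    # Jittered deadline at 70-90% of the validity window, recomputed per pass.
    frac = 0.7 + 0.2 * random.random()
    deadline = not_before + timedelta(seconds=total.total_seconds() * frac)
    print("rotation deadline:", deadline)
    # 0.70 of a year from 2025-02-24 lands near 2025-11-07, and 0.86 near
    # 2026-01-04, bracketing the deadlines this log actually prints.
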
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038569 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:05 crc kubenswrapper[4766]: E0130 16:23:05.038738 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.119881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.221998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.222096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.324888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.427277 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.530468 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.633696 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.736896 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.839425 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:05 crc kubenswrapper[4766]: I0130 16:23:05.942294 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:05Z","lastTransitionTime":"2026-01-30T16:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.010045 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:17:05.363681721 +0000 UTC Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.039366 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:06 crc kubenswrapper[4766]: E0130 16:23:06.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.045302 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.052804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.066350 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.077335 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.089844 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.102336 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.115237 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.127472 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.140155 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.147952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.149884 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.159864 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.169578 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.179650 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.188259 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.205293 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3dfb77226b722802de4a6648846be67d88834db86128acbeb613ecd26cc46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549095 5990 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:22:55.549425 5990 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:22:55.549491 5990 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:22:55.549512 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:22:55.549518 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:22:55.549540 5990 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 16:22:55.549558 5990 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16:22:55.549598 5990 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:22:55.549617 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 16:22:55.549640 5990 factory.go:656] Stopping watch factory\\\\nI0130 16:22:55.549659 5990 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:22:55.549706 5990 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:22:55.549717 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:22:55.549726 5990 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 16:22:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.218674 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.229911 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249279 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.249310 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352149 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.352198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.455269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.604342 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.709426 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.814850 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:06 crc kubenswrapper[4766]: I0130 16:23:06.917259 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:06Z","lastTransitionTime":"2026-01-30T16:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.011233 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:29:55.571280892 +0000 UTC Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020790 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.020813 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038799 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.038909 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.038938 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.038996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.039089 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.123145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.225152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.299698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.299871 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:07 crc kubenswrapper[4766]: E0130 16:23:07.299959 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:15.299938887 +0000 UTC m=+49.937896243 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.327612 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430560 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430648 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.430699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533794 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.533823 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.636960 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.738979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.739123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.739307 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.841982 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945467 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:07 crc kubenswrapper[4766]: I0130 16:23:07.945484 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:07Z","lastTransitionTime":"2026-01-30T16:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.012353 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:42:02.032628955 +0000 UTC Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.039147 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:08 crc kubenswrapper[4766]: E0130 16:23:08.039393 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.048729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.151080 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.253720 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356090 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356401 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.356572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459480 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.459510 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562412 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.562455 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.665744 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.768854 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.769471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.871646 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:08 crc kubenswrapper[4766]: I0130 16:23:08.974989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:08Z","lastTransitionTime":"2026-01-30T16:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.013078 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:56:13.873077564 +0000 UTC Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039369 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038962 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.038947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039702 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:09 crc kubenswrapper[4766]: E0130 16:23:09.039888 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077651 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.077679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180357 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180532 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.180600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.283848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.284111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.284371 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.386821 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.489529 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.593217 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.695964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696572 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.696655 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.815285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917273 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:09 crc kubenswrapper[4766]: I0130 16:23:09.917291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:09Z","lastTransitionTime":"2026-01-30T16:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.014150 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:14:23.895250366 +0000 UTC Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.019970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.020086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.038841 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:10 crc kubenswrapper[4766]: E0130 16:23:10.039000 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.121976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.122059 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.224111 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.326863 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.429227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.532951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.532991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.533034 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.636342 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.738503 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.840993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.841084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943706 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:10 crc kubenswrapper[4766]: I0130 16:23:10.943747 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:10Z","lastTransitionTime":"2026-01-30T16:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.014731 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:34:07.178164548 +0000 UTC Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.038450 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038508 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:11 crc kubenswrapper[4766]: E0130 16:23:11.038926 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.045859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.046700 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.150685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254232 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.254276 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.356945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.357078 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.460549 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.564400 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.667448 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.770263 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.872948 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:11 crc kubenswrapper[4766]: I0130 16:23:11.975470 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:11Z","lastTransitionTime":"2026-01-30T16:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.015916 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:09:14.083694882 +0000 UTC Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.039406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.039539 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.040754 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.054534 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.066568 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.078836 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.089957 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.102834 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.115507 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.131639 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.141729 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.153308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.163408 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.174846 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180325 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.180337 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.192443 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.202919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.206736 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.214819 4766 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218454 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-so
cket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218628 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.218638 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.230435 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.230449 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.233571 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.239309 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.244065 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.247282 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.256732 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259916 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.259946 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.270033 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: E0130 16:23:12.270146 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282748 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.282782 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.319050 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.320944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.321667 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.332231 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.342297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.352451 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.363708 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.375913 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.385193 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.389199 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.399267 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.412486 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.429728 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.447002 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.467339 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.487071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.497516 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.515158 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.529160 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.554025 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.567996 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc 
kubenswrapper[4766]: I0130 16:23:12.589611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.589636 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692301 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.692316 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.794487 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:12 crc kubenswrapper[4766]: I0130 16:23:12.897683 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:12Z","lastTransitionTime":"2026-01-30T16:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000190 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.000200 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.016829 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:14:22.003570946 +0000 UTC Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039306 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.039353 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039407 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039512 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.039592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.102522 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.205420 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.307672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.326356 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.326940 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/1.log" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329568 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" exitCode=1 Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329610 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.329687 4766 scope.go:117] "RemoveContainer" containerID="de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.330211 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:13 crc kubenswrapper[4766]: E0130 16:23:13.330382 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.349528 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.361842 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.373072 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.384799 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.398859 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410440 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.410515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.412073 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.422836 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.432778 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.443637 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.453646 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc 
kubenswrapper[4766]: I0130 16:23:13.466210 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.484399 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf77
1dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de97db5af666a3aa1c19f12bc73f1b2f74553cffd269480101a7c1ac325e11de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"message\\\":\\\" 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0130 16:22:57.121614 6200 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:22:57.121767 6200 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" 
but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.497479 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.509860 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.512725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.521600 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.534740 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615071 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615094 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.615103 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717289 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.717298 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.820346 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922337 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:13 crc kubenswrapper[4766]: I0130 16:23:13.922422 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:13Z","lastTransitionTime":"2026-01-30T16:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.017146 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:44:06.921974672 +0000 UTC Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.024991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.025062 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.038915 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:14 crc kubenswrapper[4766]: E0130 16:23:14.039066 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.127159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.229586 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331620 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.331698 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.334443 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.339726 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:14 crc kubenswrapper[4766]: E0130 16:23:14.339896 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.351968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.362028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.372526 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.383954 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.395244 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.437942 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.440625 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.454624 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.468168 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.479400 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.490905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.500501 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.511471 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.520198 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.535829 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.539994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.540006 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.552344 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.566135 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.642733 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.744993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.745600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.848107 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:14 crc kubenswrapper[4766]: I0130 16:23:14.951139 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:14Z","lastTransitionTime":"2026-01-30T16:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.017537 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:24:59.152485004 +0000 UTC Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039230 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039313 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.039358 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.039570 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.053223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.155970 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.258135 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.361516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.381536 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.381723 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:15 crc kubenswrapper[4766]: E0130 16:23:15.381830 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:31.381805976 +0000 UTC m=+66.019763352 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465005 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.465090 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567477 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567555 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567568 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.567598 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.670314 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773506 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773564 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773595 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.773607 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.877753 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:15 crc kubenswrapper[4766]: I0130 16:23:15.981735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:15Z","lastTransitionTime":"2026-01-30T16:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.018512 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:51:01.386038027 +0000 UTC Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.039015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:16 crc kubenswrapper[4766]: E0130 16:23:16.039215 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.056898 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.073054 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.083995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084461 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.084869 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.095342 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.109707 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.124158 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.135552 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.152583 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.169786 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186145 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186749 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.186785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.200436 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.211028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.223966 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.236601 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.249506 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.260573 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.288949 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401546 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401583 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401607 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.401620 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.503870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.504049 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606661 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.606685 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.708567 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.810876 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.899415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:23:16 crc kubenswrapper[4766]: E0130 16:23:16.899690 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:23:48.899652069 +0000 UTC m=+83.537609475 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:16 crc kubenswrapper[4766]: I0130 16:23:16.913395 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:16Z","lastTransitionTime":"2026-01-30T16:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.000994 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.001023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001033 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001131 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001106033 +0000 UTC m=+83.639063419 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001145 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001158 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001166 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001221 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001220 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001247 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001262 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001233 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001221626 +0000 UTC m=+83.639179032 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001293 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001276927 +0000 UTC m=+83.639234333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.001310 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:23:49.001302178 +0000 UTC m=+83.639259644 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.015989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.019066 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:16:48.033275362 +0000 UTC Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.038342 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038493 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:17 crc kubenswrapper[4766]: E0130 16:23:17.038737 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118902 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.118915 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.221225 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.324504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427269 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427334 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427375 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.427393 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.530906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633278 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633295 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.633308 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.666058 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.674145 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.677920 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.687134 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.703896 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.719090 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.732654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.735500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.747393 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.766973 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.784247 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.801821 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.816795 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.828633 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.838689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.841126 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.854024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.862929 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.874513 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.885370 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.942950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.942987 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:17 crc kubenswrapper[4766]: I0130 16:23:17.943036 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:17Z","lastTransitionTime":"2026-01-30T16:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.020248 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:30:06.63252422 +0000 UTC Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.038751 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:18 crc kubenswrapper[4766]: E0130 16:23:18.038940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.046746 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.149335 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251880 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251943 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.251971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355464 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.355577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.458663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.561744 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664307 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.664329 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.766755 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.868965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.868999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.869033 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.970962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:18 crc kubenswrapper[4766]: I0130 16:23:18.971051 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:18Z","lastTransitionTime":"2026-01-30T16:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.021130 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:16:51.329029657 +0000 UTC Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038540 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.038413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:19 crc kubenswrapper[4766]: E0130 16:23:19.038712 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.073608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.177480 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.281320 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.384424 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487204 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.487218 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.589953 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.692268 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.795394 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.897976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:19 crc kubenswrapper[4766]: I0130 16:23:19.898074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:19Z","lastTransitionTime":"2026-01-30T16:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.000672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.021889 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:05:54.859965173 +0000 UTC Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.038708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:20 crc kubenswrapper[4766]: E0130 16:23:20.038863 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.103229 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205600 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.205631 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
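
The NodeNotReady heartbeats repeating every ~100 ms all carry the same message: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/. The real check lives in the libcni package; the sketch below is a simplified stand-in that only reports whether any candidate config files exist in that directory (the directory path is taken from the log; .conf/.conflist/.json is the extension set libcni conventionally loads):

// cnicheck.go: simplified approximation of the check behind the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config candidate:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}

On this node the directory is evidently empty (or absent), which is why every status sync re-reports NetworkReady=false until the network provider writes its config.
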
Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.309070 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411479 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411499 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.411506 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514194 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.514203 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616866 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.616882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.719270 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.822430 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:20 crc kubenswrapper[4766]: I0130 16:23:20.924843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:20Z","lastTransitionTime":"2026-01-30T16:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.022396 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:50:24.88891897 +0000 UTC Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.027900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.027988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.028057 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038646 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038727 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.038656 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.038829 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.038938 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:21 crc kubenswrapper[4766]: E0130 16:23:21.039247 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131316 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.131407 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.234603 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336814 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.336846 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.438962 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.541999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.542068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.644995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.645005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748397 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.748408 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851195 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.851227 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953365 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953445 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:21 crc kubenswrapper[4766]: I0130 16:23:21.953470 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:21Z","lastTransitionTime":"2026-01-30T16:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.023290 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:18:24.730049379 +0000 UTC Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.038609 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.038721 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055752 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.055832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.158719 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260789 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260892 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260917 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.260934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363608 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.363618 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436089 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.436121 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.448521 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452889 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.452902 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.468479 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.473241 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.493677 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.499085 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.515605 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.520293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.535971 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:22Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:22 crc kubenswrapper[4766]: E0130 16:23:22.536135 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.538634 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.642378 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.744194 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.846736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949557 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:22 crc kubenswrapper[4766]: I0130 16:23:22.949635 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:22Z","lastTransitionTime":"2026-01-30T16:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.023781 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:57:21.078551278 +0000 UTC Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039354 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039480 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.039643 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:23 crc kubenswrapper[4766]: E0130 16:23:23.039768 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.051600 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.154944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.155068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.257622 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.360084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.462476 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565287 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565314 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.565328 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.668136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.770704 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872573 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872584 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872599 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.872608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:23 crc kubenswrapper[4766]: I0130 16:23:23.975891 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:23Z","lastTransitionTime":"2026-01-30T16:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.024644 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:59:19.402867346 +0000 UTC Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.039070 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:24 crc kubenswrapper[4766]: E0130 16:23:24.039230 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.077983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.078045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.180978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.181003 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.181017 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.283564 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386666 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.386699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.489205 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.592105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695093 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.695122 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.797972 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:24 crc kubenswrapper[4766]: I0130 16:23:24.900535 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:24Z","lastTransitionTime":"2026-01-30T16:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002414 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.002493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.025777 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:10:17.296184681 +0000 UTC Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039138 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.039243 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039322 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039416 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:25 crc kubenswrapper[4766]: E0130 16:23:25.039527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105297 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.105396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.207997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208054 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208067 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.208096 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.310224 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413246 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.413317 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.515815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617848 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617872 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.617881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.719914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.821908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822007 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822023 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822043 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.822056 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:25 crc kubenswrapper[4766]: I0130 16:23:25.924872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:25Z","lastTransitionTime":"2026-01-30T16:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027731 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.027800 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.026065 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:10:59.742007517 +0000 UTC Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.038582 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:26 crc kubenswrapper[4766]: E0130 16:23:26.038730 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.040280 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:26 crc kubenswrapper[4766]: E0130 16:23:26.040498 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.052968 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.066353 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.076380 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.093359 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.106711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129112 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.129724 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.142033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.153565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.164586 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.177104 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.189028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.202063 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.214420 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.228769 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232533 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.232541 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.241538 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.255592 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.269947 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334762 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334786 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.334796 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.437534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.539996 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.643689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.746811 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.849900 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953081 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:26 crc kubenswrapper[4766]: I0130 16:23:26.953152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:26Z","lastTransitionTime":"2026-01-30T16:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.029958 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:35:47.494178092 +0000 UTC Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039364 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039383 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.039387 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039667 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:27 crc kubenswrapper[4766]: E0130 16:23:27.039523 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
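
The NodeNotReady heartbeats repeating above all carry the same KubeletNotReady reason: no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so the container runtime network stays NetworkReady=false and sandbox creation keeps being skipped for pods such as network-metrics-daemon-xrldv until the network plugin (ovn-kubernetes on this node) writes its config. A sketch of the check the kubelet is effectively making; the directory comes from the log message, and the .conf/.conflist/.json extensions are standard CNI conventions (this helper is illustrative, not kubelet code):

import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

def cni_configs(path=CNI_DIR):
    """Return the CNI config files a runtime would consider, sorted by name."""
    if not os.path.isdir(path):
        return []
    return sorted(
        f for f in os.listdir(path)
        if f.endswith((".conf", ".conflist", ".json"))
    )

configs = cni_configs()
if configs:
    print("CNI configs present:", configs)
else:
    # Mirrors the condition in the log: NetworkReady=false, NetworkPluginNotReady
    print("no CNI configuration file in", CNI_DIR)

An empty result here while ovnkube-control-plane reports Running (as it does above) usually means the node-local OVN components have not yet written their config, which is consistent with the webhook failures blocking the network operator's progress.
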
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056428 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.056457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159239 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159257 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.159269 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261601 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261617 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.261654 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364162 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.364285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.466977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.467040 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.569560 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.671672 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774283 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.774305 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878187 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.878275 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981051 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981104 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:27 crc kubenswrapper[4766]: I0130 16:23:27.981112 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:27Z","lastTransitionTime":"2026-01-30T16:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.030438 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:38:36.171482427 +0000 UTC Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.039035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:28 crc kubenswrapper[4766]: E0130 16:23:28.039272 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083867 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083946 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.083957 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.186999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.187076 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.289843 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.391725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493614 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.493694 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595650 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.595699 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.699138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.800983 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:28 crc kubenswrapper[4766]: I0130 16:23:28.903106 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:28Z","lastTransitionTime":"2026-01-30T16:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005556 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.005565 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.031396 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:09:04.560009758 +0000 UTC Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038791 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039016 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
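
One detail worth noting in the 16:23:27 through 16:23:29 certificate_manager lines: the kubelet-serving certificate is valid until 2026-02-24, yet every computed rotation deadline (2026-01-04, 2026-01-14, 2025-11-20) already lies in the past relative to the 2026-01-30 clock, so the kubelet treats rotation as due immediately and re-rolls the deadline on each pass. The deadline moves between log lines because the certificate manager picks a jittered point inside the certificate's lifetime; the sketch below models that recomputation under two explicit assumptions not found in the log: a 70-90% jitter band (my understanding of client-go's certificate manager, treat it as an assumption) and a one-year validity period for the certificate:

import random
from datetime import datetime, timedelta

# notAfter is taken from the log; notBefore is an ASSUMED one-year lifetime.
NOT_AFTER = datetime(2026, 2, 24, 5, 53, 3)
NOT_BEFORE = NOT_AFTER - timedelta(days=365)

def rotation_deadline(not_before, not_after, rng=random):
    """Jittered deadline somewhere in the assumed 70-90% band of the lifetime."""
    total = not_after - not_before
    return not_before + total * (0.7 + 0.2 * rng.random())

now = datetime(2026, 1, 30, 16, 23, 29)  # node clock from the log
deadline = rotation_deadline(NOT_BEFORE, NOT_AFTER)
print("deadline:", deadline, "| rotation due now:", deadline <= now)

Run a few times, the printed deadline lands on different dates, all before `now` for a certificate this far into its lifetime, which matches the three distinct past-dated deadlines logged one second apart.
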
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.038963 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:29 crc kubenswrapper[4766]: E0130 16:23:29.039158 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107527 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107540 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.107550 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211048 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.211133 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313904 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.313914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416687 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.416731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.519415 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621355 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.621365 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.725281 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827112 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.827142 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.928962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929030 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929047 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:29 crc kubenswrapper[4766]: I0130 16:23:29.929059 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:29Z","lastTransitionTime":"2026-01-30T16:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031506 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:31:39.291684466 +0000 UTC Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031842 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.031903 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.039455 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:30 crc kubenswrapper[4766]: E0130 16:23:30.039581 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134833 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.134842 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
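Each certificate_manager.go entry in this stretch reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline. That is expected: client-go's certificate manager draws a randomized deadline part-way through the certificate's validity window, so a fleet of kubelets does not rotate simultaneously. A minimal sketch, assuming a uniform draw from 70–90% of the lifetime and a hypothetical one-year issue date (the exact upstream constants may differ):

```go
// A sketch of jittered certificate-rotation scheduling. The 70–90%
// window and the issue date are assumptions for illustration.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a randomized instant between 70% and 90% of
// the way through the certificate's validity period.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	// Expiry taken from the log; a one-year lifetime is assumed.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

All of the deadlines logged here (November and December 2025) are already in the past relative to the node's clock (January 2026), which is consistent with the manager re-logging a freshly drawn deadline on every pass.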
Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.237752 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340320 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.340361 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.442261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.544784 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647234 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.647266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.749658 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.852109 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954535 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:30 crc kubenswrapper[4766]: I0130 16:23:30.954546 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:30Z","lastTransitionTime":"2026-01-30T16:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.032277 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:18:47.58765427 +0000 UTC Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038773 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.038798 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038892 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.038963 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056736 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.056769 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158884 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.158971 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.261151 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.363556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.447127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.447386 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:31 crc kubenswrapper[4766]: E0130 16:23:31.447491 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:03.447466783 +0000 UTC m=+98.085424189 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468170 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.468196 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
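The MountVolume.SetUp failure just above is not retried immediately: nestedpendingoperations refuses further attempts until 16:24:03, 32 seconds later (durationBeforeRetry 32s). The kubelet backs off exponentially on repeated failures of the same volume operation. A generic sketch of that pattern follows; the 500 ms initial delay and 2-minute cap are assumptions (the real constants may differ), but under them the seventh consecutive failure lands exactly on the 32 s delay seen in the log.

```go
// A sketch of per-operation exponential backoff, in the spirit of the
// "No retries permitted until ... (durationBeforeRetry 32s)" entry above.
// Initial delay, growth factor, and cap are illustrative assumptions.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration // wait imposed after the next failure
	cap   time.Duration // upper bound on the wait
}

// next returns the current wait and doubles it for the following failure.
func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.cap {
		b.delay = b.cap
	}
	return d
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, cap: 2 * time.Minute}
	now := time.Date(2026, 1, 30, 16, 23, 31, 0, time.UTC)
	for i := 1; i <= 7; i++ {
		d := b.next()
		now = now.Add(d)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			i, now.Format(time.RFC3339), d)
	}
}
```

The underlying error ("metrics-daemon-secret" not registered) means the kubelet has not yet synced that Secret from the API server, so backing off rather than hot-looping is the intended behavior.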
Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570147 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.570160 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.672592 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.774754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877462 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.877494 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:31 crc kubenswrapper[4766]: I0130 16:23:31.979563 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:31Z","lastTransitionTime":"2026-01-30T16:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.033136 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:15:48.899637259 +0000 UTC Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.038539 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.038648 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
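A few entries further down, the kubelet's attempts to patch the node status fail with `tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z` when calling the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743. That wording is produced by Go's crypto/x509 validity-window check, reproduced here as a minimal sketch against a hypothetical PEM file:

```go
// A minimal reproduction of the validity check behind Go's
// "x509: certificate has expired or is not yet valid" error seen in the
// node-status patch failures below. The certificate path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same NotBefore/NotAfter comparison x509 verification performs;
	// when it fails during a TLS handshake, the client reports the error
	// quoted in the log.
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity period")
	}
}
```

Nothing the kubelet retries can succeed until that webhook's serving certificate is reissued: the node's clock (2026-01-30) is months past the certificate's NotAfter (2025-08-24).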
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.081991 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.184720 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.287882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.288093 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.390506 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493115 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493156 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493200 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.493212 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597025 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.597140 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.699692 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801717 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.801748 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899095 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.899125 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.910949 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914709 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914776 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.914812 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.927070 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.930838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.943049 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.946406 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.957863 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961108 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961168 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.961194 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.973208 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:32 crc kubenswrapper[4766]: E0130 16:23:32.973319 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974623 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
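Every patch attempt above dies on the same root cause: the node.network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-30, so each status update is rejected until the kubelet gives up with "update node status exceeds retry count". A minimal Python sketch for confirming the certificate window from the node itself; the endpoint and timestamps come straight from the log lines, while shell access to the node and the third-party cryptography package are assumptions:

    # Sketch only: fetch the webhook's serving certificate (endpoint taken from the
    # failed Post in the log) and compare its validity window to the node clock.
    import socket, ssl
    from datetime import datetime, timezone

    from cryptography import x509  # assumed installed; *_utc properties need cryptography >= 42

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so we can inspect it

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)   # log says 2025-08-24T17:21:41Z
    print("now:      ", now)                        # log says 2026-01-30T16:23:32Z
    print("expired:  ", now > cert.not_valid_after_utc)

If the printed window matches the log, this looks like a certificate-rotation problem rather than a networking one, and the CNI symptoms that follow are downstream of it.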
event="NodeHasSufficientMemory" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:32 crc kubenswrapper[4766]: I0130 16:23:32.974702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:32Z","lastTransitionTime":"2026-01-30T16:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.033338 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:20:47.277812912 +0000 UTC Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038621 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038692 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.038746 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.038782 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.038844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:33 crc kubenswrapper[4766]: E0130 16:23:33.039139 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076505 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.076515 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.178952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281169 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.281223 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.383824 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.398866 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399140 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008" exitCode=1 Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.399674 4766 scope.go:117] "RemoveContainer" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.412656 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.423212 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.436969 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.447299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.457844 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.469076 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.478291 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486192 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.486260 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.501086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd1687
28257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.515521 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.526403 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.537420 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.546822 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588808 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.588835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.690967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.691046 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.726780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.750024 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.763685 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.778434 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.789119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793578 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.793604 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896411 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.896421 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:33 crc kubenswrapper[4766]: I0130 16:23:33.998754 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:33Z","lastTransitionTime":"2026-01-30T16:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.034231 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:57:11.303044139 +0000 UTC Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.038687 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:34 crc kubenswrapper[4766]: E0130 16:23:34.038788 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101417 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.101479 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204350 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204395 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204424 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.204436 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305896 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.305923 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.403227 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.403281 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.410994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.411005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.417678 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.428766 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.437905 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.455100 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.468051 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.480878 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
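
[annotation] For reference, each err="failed to patch status" body in these entries is a strategic-merge patch: the $setElementOrder/conditions directive pins the order of the conditions list, while each entry under conditions is merged into the live object by its "type" key. A minimal sketch of the patch shape (uid and condition types copied from the multus-additional-cni-plugins entry above; illustrative, not status_manager.go):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "8da0c398-554f-47ad-aada-70e4b5c9ec98"},
		"status": map[string]any{
			// $setElementOrder/... is a strategic-merge-patch directive: it fixes
			// the ordering of the named list without replacing its contents.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"}, {"type": "Initialized"},
				{"type": "Ready"}, {"type": "ContainersReady"}, {"type": "PodScheduled"},
			},
			// Entries here are merged into the live conditions list by "type".
			"conditions": []map[string]any{
				{"type": "Ready", "status": "True", "lastTransitionTime": "2026-01-30T16:22:56Z"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b))
}
```
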
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.493047 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
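
[annotation] The kube-multus restart recorded above (restartCount 1, exit code 1) is a downstream effect of the crash-looping ovnkube-controller: multus polls for a readiness-indicator file written by the default network plugin and gives up when the timeout elapses, producing the "timed out waiting for the condition" message. A self-contained sketch of that poll loop (assumed semantics; the 1s interval and 45s timeout below are placeholders, not multus's real values):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses, returning the
// same "timed out waiting for the condition" text seen in the log.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // default network wrote its config: ready
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path copied from the log line above; interval/timeout are assumptions.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err)
}
```
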
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.504210 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512909 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512927 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.512938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.517011 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
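
[annotation] The NodeNotReady transition above traces back to the same root cause: with ovnkube-controller in CrashLoopBackOff, nothing writes a network config into /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkReady=false and the kubelet marks the node not ready. A hedged sketch of the config-directory scan implied by that message (assumed behavior, not CRI-O or kubelet source; the glob patterns are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfig looks for a CNI network configuration in dir, preferring
// .conflist bundles; an empty directory yields the error seen in the log.
func findCNIConfig(dir string) (string, error) {
	for _, pat := range []string{"*.conflist", "*.conf", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return "", err
		}
		if len(matches) > 0 {
			return matches[0], nil
		}
	}
	return "", fmt.Errorf("no CNI configuration file in %s", dir)
}

func main() {
	if _, err := findCNIConfig("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Println(err) // matches the NetworkPluginNotReady message above
		os.Exit(1)
	}
}
```
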
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.530208 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.541456 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.551312 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.561654 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.572812 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.581466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.590946 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.599437 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615239 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615274 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.615285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717744 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.717773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.819936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.819992 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.820024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:34 crc kubenswrapper[4766]: I0130 16:23:34.923458 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:34Z","lastTransitionTime":"2026-01-30T16:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.025802 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.034908 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:04:04.526001435 +0000 UTC Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039157 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039188 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.039239 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039290 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039392 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:35 crc kubenswrapper[4766]: E0130 16:23:35.039447 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128307 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.128386 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.230998 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332959 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.332993 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435821 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.435922 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538367 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.538404 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.641419 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743590 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.743709 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.846234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:35 crc kubenswrapper[4766]: I0130 16:23:35.950155 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:35Z","lastTransitionTime":"2026-01-30T16:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.035706 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:28:21.287282315 +0000 UTC Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.039355 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:36 crc kubenswrapper[4766]: E0130 16:23:36.039517 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.052868 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.052962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.053153 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.064467 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.077107 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.088806 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.101427 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.113086 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.133113 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.144662 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157547 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157562 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.157572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.160550 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428
318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.174516 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.184275 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.196195 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.205891 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.225653 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.238814 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.251202 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.260254 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.262804 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.361968 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.464683 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.566994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.567069 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669444 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.669469 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.772679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.874999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.875012 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977830 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:36 crc kubenswrapper[4766]: I0130 16:23:36.977863 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:36Z","lastTransitionTime":"2026-01-30T16:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.036603 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:40:54.164602687 +0000 UTC Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038948 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038958 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.038961 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039214 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039063 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:37 crc kubenswrapper[4766]: E0130 16:23:37.039350 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079680 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.079758 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182663 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.182702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.285508 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388087 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.388152 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.490729 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.597841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598939 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.598976 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.701309 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803449 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.803527 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:37 crc kubenswrapper[4766]: I0130 16:23:37.907799 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:37Z","lastTransitionTime":"2026-01-30T16:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.009845 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.037345 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:36:45.307604891 +0000 UTC Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.038689 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:38 crc kubenswrapper[4766]: E0130 16:23:38.038834 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112380 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.112413 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.215751 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321470 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.321556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424189 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.424228 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.526678 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628673 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.628697 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730964 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.730981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.731001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.731013 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.832993 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.833008 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.833017 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935077 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935100 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:38 crc kubenswrapper[4766]: I0130 16:23:38.935109 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:38Z","lastTransitionTime":"2026-01-30T16:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037598 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:49:21.32062105 +0000 UTC Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037711 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.037724 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.038964 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.039005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039079 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.038971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039211 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:39 crc kubenswrapper[4766]: E0130 16:23:39.039344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140491 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140541 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140565 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.140574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242751 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242809 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.242837 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345135 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.345245 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448302 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448328 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.448345 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550770 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.550791 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.652960 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.652991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653004 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653020 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.653032 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755267 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.755339 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857692 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857716 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.857725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959953 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959965 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:39 crc kubenswrapper[4766]: I0130 16:23:39.959995 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:39Z","lastTransitionTime":"2026-01-30T16:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.038518 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:38:08.839797951 +0000 UTC Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.038531 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:40 crc kubenswrapper[4766]: E0130 16:23:40.038678 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062262 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062345 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.062362 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165694 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.165702 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267952 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267963 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.267998 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370763 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.370785 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.473206 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575816 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.575873 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677656 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677710 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.677737 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780321 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780376 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.780410 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883817 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.883829 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.985955 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:40 crc kubenswrapper[4766]: I0130 16:23:40.986072 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:40Z","lastTransitionTime":"2026-01-30T16:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.038994 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039004 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039059 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039051 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:15:19.038612132 +0000 UTC Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039444 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.039819 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:41 crc kubenswrapper[4766]: E0130 16:23:41.039878 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.088972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.089031 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190924 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.190952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.292981 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293016 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293026 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.293048 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.394695 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.422856 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.425288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.426353 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.436826 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.446580 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.458901 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.467874 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.481398 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496906 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc 
kubenswrapper[4766]: I0130 16:23:41.496915 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.496937 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.497085 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.511299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.539701 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.554095 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.572819 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.588050 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599151 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599228 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599240 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.599337 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.610297 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.627997 4766 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97
b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.639548 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.650790 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.661070 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.701795 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804196 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804266 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.804278 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906827 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:41 crc kubenswrapper[4766]: I0130 16:23:41.906879 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:41Z","lastTransitionTime":"2026-01-30T16:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.009293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.038932 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.039144 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:38:23.858525015 +0000 UTC Jan 30 16:23:42 crc kubenswrapper[4766]: E0130 16:23:42.039398 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.052347 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.111495 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214856 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.214938 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318221 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.318402 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.422523 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.430898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.432249 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/2.log" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.436743 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" exitCode=1 Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.436826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.437051 4766 scope.go:117] "RemoveContainer" containerID="7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.438045 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:42 crc kubenswrapper[4766]: E0130 16:23:42.438384 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.457425 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.472570 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.489290 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.505839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.521957 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524728 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524811 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.524855 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.538466 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.551882 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.565779 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.578138 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.593299 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.604810 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.615119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627670 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.627749 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.633001 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8ca
bdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a74b51a75799769b1eccf7cde8dbf771dcd168728257bb835b5e68ab920c2e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:12Z\\\",\\\"message\\\":\\\"or_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944427 6408 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:23:12.944213 6408 services_controller.go:445] Built service openshift-multus/multus-admission-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 16:23:12.944465 6408 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" but failed to find it\\\\nI0130 16:23:12.944472 6408 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\\\\\" in cache\\\\nF0130 16:23:12.944413 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.649075 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.664138 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.679047 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.692364 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.706068 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:42Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730259 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.730363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837028 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837109 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.837153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940863 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:42 crc kubenswrapper[4766]: I0130 16:23:42.940879 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:42Z","lastTransitionTime":"2026-01-30T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039117 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039163 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039264 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.039293 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:18:29.96848122 +0000 UTC Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039465 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.039518 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043632 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043688 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043743 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.043770 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089558 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.089609 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.113461 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118457 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118659 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.118990 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.133901 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138596 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.138723 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.151934 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156469 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.156495 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.172983 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.177888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.192473 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.192987 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194972 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194984 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.194999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.195014 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297389 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297429 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.297466 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399861 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.399911 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.442619 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.445994 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:43 crc kubenswrapper[4766]: E0130 16:23:43.446129 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.462304 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.476858 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.491925 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.504652 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.507839 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.520144 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.536924 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.553323 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.562952 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.587346 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606662 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606708 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606721 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.606731 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.616615 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.630775 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 
16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.640243 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.650161 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.659980 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.668483 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.686028 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.705713 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709409 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709455 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc 
kubenswrapper[4766]: I0130 16:23:43.709465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.709493 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.719477 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.812671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.813442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:43 crc kubenswrapper[4766]: I0130 16:23:43.915803 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:43Z","lastTransitionTime":"2026-01-30T16:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018069 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018160 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.018237 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.038857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:44 crc kubenswrapper[4766]: E0130 16:23:44.039021 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.039834 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:51:31.042616073 +0000 UTC Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.120956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121261 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121359 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121485 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.121570 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224353 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.224438 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326543 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.326572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.429969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430050 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.430065 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.533092 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635871 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.635935 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.738948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.738999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739058 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.739071 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841045 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.841084 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943522 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:44 crc kubenswrapper[4766]: I0130 16:23:44.943538 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:44Z","lastTransitionTime":"2026-01-30T16:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039495 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.039502 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039621 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039705 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:45 crc kubenswrapper[4766]: E0130 16:23:45.039811 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.040318 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:10:47.910878693 +0000 UTC Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.045974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.046053 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148330 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.148373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250598 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250654 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.250666 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.353977 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354329 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.354385 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457571 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457587 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.457628 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.560463 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.663249 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765922 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.765955 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.868889 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971638 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:45 crc kubenswrapper[4766]: I0130 16:23:45.971871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:45Z","lastTransitionTime":"2026-01-30T16:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:46 crc kubenswrapper[4766]: E0130 16:23:46.038525 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.040534 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:56:51.111839811 +0000 UTC Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.058169 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.070943 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074860 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.074954 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.088897 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.101657 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.112033 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.128023 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.144111 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.157639 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.168674 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177529 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177544 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.177553 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.180994 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.190605 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is 
after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.200363 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.212164 4766 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.222548 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.244045 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.261281 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.275296 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280167 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.280259 4766 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.287935 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382808 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.382819 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484950 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.484991 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.485007 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587720 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.587780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.690851 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
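
The kube-multus restart recorded earlier (lastState exitCode 1) came from a timed-out wait on the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. A rough sketch of such a poll-until-deadline loop using only the standard library; the interval and timeout below are illustrative assumptions, not multus's actual settings:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses, mirroring the
// "still waiting for readinessindicatorfile" wait seen in the multus log.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file appeared; the default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, "have you checked that your default network is ready?", err)
		os.Exit(1)
	}
}
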
Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794452 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794528 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.794612 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897701 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:46 crc kubenswrapper[4766]: I0130 16:23:46.897715 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:46Z","lastTransitionTime":"2026-01-30T16:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000270 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000340 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000356 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.000396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
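
Each "Node became not ready" condition in this stream carries the same KubeletNotReady message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The container runtime reports NetworkReady=false until a network config lands in that directory. A simplified sketch of such a probe; the accepted extensions are an assumption for illustration, and the authoritative check lives in the container runtime, not in this snippet:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network config.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config extensions
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once the network provider writes its config into that directory, the condition flips back and this NodeNotReady event storm stops.
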
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039443 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.039512 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039622 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039724 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:47 crc kubenswrapper[4766]: E0130 16:23:47.039876 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.041423 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:27:27.017518218 +0000 UTC Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.103860 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206114 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206157 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.206171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309416 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309425 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309439 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.309457 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413515 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413563 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.413621 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516510 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.516551 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619738 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.619967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.620054 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.722770 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825197 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825215 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.825226 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927936 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927949 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:47 crc kubenswrapper[4766]: I0130 16:23:47.927957 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:47Z","lastTransitionTime":"2026-01-30T16:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030840 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030877 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030886 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.030945 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.038342 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:48 crc kubenswrapper[4766]: E0130 16:23:48.038473 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.041588 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:16:26.864939336 +0000 UTC Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.134300 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.236838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.339247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441513 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441577 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.441589 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544290 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.544324 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647258 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647268 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.647291 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749702 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749741 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.749755 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.851980 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852039 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.852067 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.931128 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:23:48 crc kubenswrapper[4766]: E0130 16:23:48.931356 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.931322354 +0000 UTC m=+147.569279700 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954978 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.954994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.955011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:48 crc kubenswrapper[4766]: I0130 16:23:48.955024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:48Z","lastTransitionTime":"2026-01-30T16:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
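
The UnmountVolume failure just above is a registration race rather than a storage fault: after the kubelet restart, the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered over its plugin socket, so TearDownAt cannot obtain a client for it. A toy sketch of a lookup that fails the same way; the registry type here is invented for illustration and is not the kubelet's actual data structure:

package main

import "fmt"

// driverRegistry stands in for the kubelet's table of registered CSI plugins.
type driverRegistry map[string]struct{}

func (r driverRegistry) client(name string) error {
	if _, ok := r[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	registered := driverRegistry{} // empty immediately after a kubelet restart
	if err := registered.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("UnmountVolume.TearDown failed:", err)
	}
}
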
Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.032748 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032762 4766 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032834 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.032812461 +0000 UTC m=+147.670769807 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032897 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032919 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032933 4766 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032971 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.032959431 +0000 UTC m=+147.670916797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.032893 4766 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033011 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.033003753 +0000 UTC m=+147.670961119 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033020 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033059 4766 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033072 4766 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.033144 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.033124183 +0000 UTC m=+147.671081539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.038699 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.038846 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.038923 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.039017 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.039372 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
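
Note the durationBeforeRetry 1m4s on these mount failures: 64 s is 500 ms doubled seven times, consistent with exponential backoff on repeated volume operations. A small sketch of that progression; the base delay, factor, and cap are assumptions for illustration rather than values read from the kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond // assumed initial delay
	maxBackoff := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("failure %d: next retry in %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	// failure 8 prints "next retry in 1m4s", matching the log.
}
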
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:49 crc kubenswrapper[4766]: E0130 16:23:49.039505 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.041982 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:48:54.945482776 +0000 UTC Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058588 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058665 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.058693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162887 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.162906 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266539 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266591 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.266649 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.369340 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475144 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.475158 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578669 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578756 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.578791 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681668 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681681 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.681736 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784865 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.784881 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887447 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887460 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.887471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990084 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:49 crc kubenswrapper[4766]: I0130 16:23:49.990105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:49Z","lastTransitionTime":"2026-01-30T16:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.038667 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:50 crc kubenswrapper[4766]: E0130 16:23:50.038810 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.042622 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:51:11.747058018 +0000 UTC Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.093658 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196465 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196475 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.196507 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299435 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299519 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.299556 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403793 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403849 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.403882 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.506574 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610585 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610641 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.610663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713304 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713393 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.713415 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.815534 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918603 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918616 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:50 crc kubenswrapper[4766]: I0130 16:23:50.918647 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:50Z","lastTransitionTime":"2026-01-30T16:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.020940 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.020999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021015 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.021042 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039319 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.039161 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039475 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:51 crc kubenswrapper[4766]: E0130 16:23:51.039519 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.043691 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:39:59.066129954 +0000 UTC Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.123925 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.226767 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.227110 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330251 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.330305 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.432086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535235 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.535260 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637912 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637923 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.637953 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.740987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.741001 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.741009 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843402 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843432 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.843443 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945933 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.945985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.946000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:51 crc kubenswrapper[4766]: I0130 16:23:51.946011 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:51Z","lastTransitionTime":"2026-01-30T16:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.038340 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:52 crc kubenswrapper[4766]: E0130 16:23:52.038507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.044078 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:23:14.579113988 +0000 UTC Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047898 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.047928 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150243 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150260 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150282 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.150298 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253009 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.253114 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.355633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.355935 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356323 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.356611 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460247 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460312 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460362 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.460374 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.567372 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.568166 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671421 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.671452 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775554 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775570 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.775616 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877907 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877973 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.877994 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980441 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980493 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:52 crc kubenswrapper[4766]: I0130 16:23:52.980504 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:52Z","lastTransitionTime":"2026-01-30T16:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.039413 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.039470 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.039570 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.039675 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.040027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.040423 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.044673 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:05:32.413613619 +0000 UTC Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084593 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.084627 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187894 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.187968 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.290928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291757 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291874 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.291986 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.292077 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.397847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.398982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399096 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.399336 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439903 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.439983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.440006 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.440020 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.456091 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462468 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.462500 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.479151 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483642 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.483697 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.497552 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.501983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502061 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.502102 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.514983 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519244 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519305 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.519337 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.532720 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:53 crc kubenswrapper[4766]: E0130 16:23:53.532941 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534824 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.534926 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.638236 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.741947 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742027 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742040 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.742080 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845461 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845581 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.845601 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948729 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948815 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:53 crc kubenswrapper[4766]: I0130 16:23:53.948868 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:53Z","lastTransitionTime":"2026-01-30T16:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.039308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:54 crc kubenswrapper[4766]: E0130 16:23:54.039536 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.044845 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:36:18.525127483 +0000 UTC Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051410 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051428 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051453 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.051471 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.154735 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.258566 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362242 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362272 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.362329 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465407 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465471 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.465516 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568788 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.568999 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.569068 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.671797 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672165 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672299 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.672384 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774373 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774433 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.774462 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
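The "Unable to update node status ... exceeds retry count" error logged at 16:23:53 follows the same pattern: the kubelet attempts the node status update a fixed number of times per sync (nodeStatusUpdateRetry, 5 in the upstream kubelet) and gives up when every attempt is rejected, here by the expired admission webhook. A sketch of that bounded retry, with a stub standing in for the real PATCH call:

```go
// retrystatus.go - sketch of the bounded retry around node status updates;
// updateNodeStatus is a stub, not the kubelet's implementation.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet constant

func updateNodeStatus(attempt int) error {
	// In the log every attempt fails the same way: the admission webhook's
	// serving certificate has expired, so the PATCH is rejected.
	return errors.New("Internal error occurred: failed calling webhook")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := updateNodeStatus(i); err != nil {
			fmt.Printf("attempt %d: %v\n", i+1, err)
			continue
		}
		return // success: stop retrying
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```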
Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877280 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877342 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877363 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.877404 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.979961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980333 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:54 crc kubenswrapper[4766]: I0130 16:23:54.980499 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:54Z","lastTransitionTime":"2026-01-30T16:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.038943 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.039062 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039117 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.039076 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039293 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.039345 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.040103 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:23:55 crc kubenswrapper[4766]: E0130 16:23:55.040344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.044933 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:44:09.530110018 +0000 UTC Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082787 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082803 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.082815 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
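The CrashLoopBackOff record above ("back-off 40s restarting failed container=ovnkube-controller") is consistent with the kubelet's crash-loop schedule: the delay starts at 10s and doubles per restart up to a 5m cap, so 40s corresponds to the start attempt after two prior restarts. A sketch of that schedule (the constants match the upstream kubelet defaults, but treat them as assumptions here):

```go
// backoff.go - sketch of the kubelet's crash-loop back-off schedule:
// base 10s, doubling per restart, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int) time.Duration {
	const (
		base = 10 * time.Second
		max  = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r))
	}
	// restart 2 -> back-off 40s, matching the ovnkube-controller record.
}
```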
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185310 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.185380 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.287835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389724 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389765 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.389808 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491739 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491774 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491782 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.491804 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594625 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594652 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.594660 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697076 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697129 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.697159 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
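Each setters.go record carries the full Ready condition payload the kubelet would write to the Node object. A sketch that rebuilds that exact JSON shape with plain structs (the k8s.io API types are deliberately not imported, to keep it self-contained; the message is truncated here):

```go
// condition.go - sketch reproducing the Ready=False condition payload seen
// in the setters.go records above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	now := time.Date(2026, 1, 30, 16, 23, 56, 0, time.UTC)
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false ...",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // timestamps render as RFC3339, as in the log
}
```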
Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800658 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.800758 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903538 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903566 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:55 crc kubenswrapper[4766]: I0130 16:23:55.903577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:55Z","lastTransitionTime":"2026-01-30T16:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006682 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006747 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.006813 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.039505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:56 crc kubenswrapper[4766]: E0130 16:23:56.039692 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.045082 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:15:12.834885485 +0000 UTC Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.061432 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
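Every status patch in these records dies at the same point: the PATCH must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24 while the node clock reads 2026-01-30. A small diagnostic sketch that fetches the webhook's certificate and compares its validity window against the local clock (InsecureSkipVerify is used only to read the certificate, not to trust it):

```go
// certprobe.go - diagnostic sketch: read the remote serving certificate and
// check its validity window, the failure mode in these records.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", leaf.NotBefore, leaf.NotAfter, now)
	if now.After(leaf.NotAfter) {
		fmt.Println("certificate has expired") // matches the x509 error in the log
	}
}
```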
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.077889 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.093110 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.107533 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110122 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110199 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110216 4766 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.110252 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.124089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.138711 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.153147 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
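The terminated states in these patches report exitCode 137 with reason ContainerStatusUnknown ("The container could not be located when the pod was deleted"). By the usual shell convention, an exit code above 128 encodes 128 plus a signal number, so 137 reads as SIGKILL (9); whether the runtime observed that signal or the kubelet synthesized the code for a lost container is not visible here. A one-line decoding sketch:

```go
// exitcode.go - sketch decoding the 137 exit code from the terminated
// container states: codes above 128 conventionally mean 128 + signal.
package main

import "fmt"

func main() {
	exitCode := 137 // from lastState.terminated in the log
	if exitCode > 128 {
		fmt.Printf("exit code %d => signal %d (SIGKILL is 9)\n",
			exitCode, exitCode-128)
	}
}
```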
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.169052 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.182780 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.195018 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc 
kubenswrapper[4766]: I0130 16:23:56.206225 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212789 4766 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212838 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212873 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.212888 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.221852 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.250021 4766 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.264670 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason
\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2
bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.281462 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file 
check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.298934 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.312572 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316250 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316311 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.316339 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.328726 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:23:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.421967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422042 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.422091 4766 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524760 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.524802 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627315 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627358 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627369 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627385 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.627396 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730834 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.730862 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833759 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833775 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.833787 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935696 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:56 crc kubenswrapper[4766]: I0130 16:23:56.935838 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:56Z","lastTransitionTime":"2026-01-30T16:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038547 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038552 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038750 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038815 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.038578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:57 crc kubenswrapper[4766]: E0130 16:23:57.038893 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039210 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.039266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.045930 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:23:24.147922994 +0000 UTC Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141695 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141742 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141753 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.141780 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245255 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245319 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245335 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.245382 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348171 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348298 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348326 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.348347 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451240 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451288 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.451333 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553252 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553281 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.553293 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656394 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656419 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656446 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.656467 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.758817 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.861997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.862122 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965201 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965220 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:57 crc kubenswrapper[4766]: I0130 16:23:57.965231 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:57Z","lastTransitionTime":"2026-01-30T16:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.039327 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:23:58 crc kubenswrapper[4766]: E0130 16:23:58.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.046318 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:50:29.581401907 +0000 UTC Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067124 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.067145 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.169236 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272292 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272349 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272374 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.272390 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375019 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375086 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375134 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.375163 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478139 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478263 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478303 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478336 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.478357 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582208 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582291 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582313 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.582327 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685264 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685276 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685294 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.685306 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788671 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788772 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.788826 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.892508 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893046 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893223 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.893261 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995236 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995284 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995300 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:58 crc kubenswrapper[4766]: I0130 16:23:58.995310 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:58Z","lastTransitionTime":"2026-01-30T16:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.039217 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.039398 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.039661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.039756 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.040079 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:23:59 crc kubenswrapper[4766]: E0130 16:23:59.040160 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.046674 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:32:31.284290467 +0000 UTC Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097725 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097784 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097822 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.097832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200152 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.200171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303791 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.303817 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406631 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406677 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406689 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406704 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.406716 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509024 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509068 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.509108 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611606 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611691 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611713 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.611762 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714398 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.714435 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817676 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817735 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817777 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.817795 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920354 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920392 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920403 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:23:59 crc kubenswrapper[4766]: I0130 16:23:59.920432 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:23:59Z","lastTransitionTime":"2026-01-30T16:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023956 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.023985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.024005 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.038532 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:00 crc kubenswrapper[4766]: E0130 16:24:00.038715 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.047762 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:22:20.179583102 +0000 UTC
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126352 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126413 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.126423 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229551 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229624 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229644 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.229693 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332804 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332869 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332900 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.332914 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.436913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.436971 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.437025 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539113 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539121 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539136 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.539146 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642655 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642758 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.642774 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745459 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745486 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745502 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.745520 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848857 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848910 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.848949 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952762 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952781 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952806 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:00 crc kubenswrapper[4766]: I0130 16:24:00.952830 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:00Z","lastTransitionTime":"2026-01-30T16:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.039497 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.039696 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.040005 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:01 crc kubenswrapper[4766]: E0130 16:24:01.040346 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.048338 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:13:43.438421431 +0000 UTC
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055592 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055619 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055634 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.055642 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158605 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158615 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158627 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.158636 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261219 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.261242 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364415 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364474 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364483 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364500 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.364510 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467064 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467078 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.467086 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569723 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.569844 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672690 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672699 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.672725 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.774942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.774996 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.775045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877732 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877795 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.877821 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980700 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980773 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980829 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:01 crc kubenswrapper[4766]: I0130 16:24:01.980847 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:01Z","lastTransitionTime":"2026-01-30T16:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.038652 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:02 crc kubenswrapper[4766]: E0130 16:24:02.038836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.049305 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:16:38.785518667 +0000 UTC
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.056694 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083487 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083792 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.083942 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.084010 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186611 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186678 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.186708 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.289905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290013 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290031 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290055 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.290074 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.394778 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395534 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.395663 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499174 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499370 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499400 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.499426 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601951 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601961 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601976 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.601989 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704883 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704957 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704975 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.704988 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807332 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807361 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.807373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910142 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:02 crc kubenswrapper[4766]: I0130 16:24:02.910155 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:02Z","lastTransitionTime":"2026-01-30T16:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014582 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014643 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014660 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014683 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.014701 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.038909 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.039005 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.038949 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039093 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039269 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.039371 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.050421 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:23:21.743220591 +0000 UTC
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116545 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116610 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116637 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.116648 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219931 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219969 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219979 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.219994 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.220024 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322771 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322899 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.322959 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426014 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426062 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426074 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.426105 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.488044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.488251 4766 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.488340 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs podName:de5fecf1-cb2c-4ae2-a240-6f8826f6dac3 nodeName:}" failed. No retries permitted until 2026-01-30 16:25:07.488321207 +0000 UTC m=+162.126278553 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs") pod "network-metrics-daemon-xrldv" (UID: "de5fecf1-cb2c-4ae2-a240-6f8826f6dac3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529080 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529137 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529146 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.529171 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631954 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.631977 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691443 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691463 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.691512 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.711291 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716891 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.716990 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.717021 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.717045 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.732743 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737161 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737448 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.737616 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.754090 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759633 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759780 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759847 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759918 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.759981 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.776581 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780930 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780983 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.780997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.781034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.781048 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.796156 4766 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6a40bef8-b5e4-4d79-9bcd-48caff34a744\\\",\\\"systemUUID\\\":\\\"a00817eb-12ea-49e2-ab4d-6ba5164a8361\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:03 crc kubenswrapper[4766]: E0130 16:24:03.796328 4766 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797832 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797888 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797911 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797941 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.797964 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902035 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902098 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902120 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:03 crc kubenswrapper[4766]: I0130 16:24:03.902136 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:03Z","lastTransitionTime":"2026-01-30T16:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005496 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005507 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005523 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.005536 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.039054 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:04 crc kubenswrapper[4766]: E0130 16:24:04.039218 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.051052 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:41:08.807222536 +0000 UTC Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109011 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109079 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109102 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.109153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212478 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212535 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.212561 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315053 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315110 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315130 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.315164 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418159 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418225 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418238 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.418247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520378 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.520417 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623059 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623140 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623163 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623233 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.623285 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725890 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725908 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.725919 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829609 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829705 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829755 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.829773 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933520 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933597 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:04 crc kubenswrapper[4766]: I0130 16:24:04.933608 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:04Z","lastTransitionTime":"2026-01-30T16:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036306 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036322 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036348 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.036363 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038389 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038442 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.038405 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038571 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:05 crc kubenswrapper[4766]: E0130 16:24:05.038821 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.051719 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:14:17.769657777 +0000 UTC Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145379 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145431 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145451 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145511 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.145552 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250154 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.250247 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354901 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354967 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.354982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.355010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.355026 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457864 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457934 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457958 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.457997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.458019 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561153 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561202 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561224 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.561234 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665212 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665229 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665249 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.665266 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768672 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768810 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768858 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.768871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.871769 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872116 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872211 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872296 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.872373 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974482 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974761 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974843 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974919 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:05 crc kubenswrapper[4766]: I0130 16:24:05.974978 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:05Z","lastTransitionTime":"2026-01-30T16:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.038733 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:06 crc kubenswrapper[4766]: E0130 16:24:06.038872 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.052696 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:24:56.509341867 +0000 UTC Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.054883 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.067165 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8812de1d53f3836b6f3fb56ce16a6b4c6eb7f89b6ca031215286a88039bd7c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10053f24063b46c64a9f2244983b985bd68bc9011965d64191f5f489f84031ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.076770 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a25c516-3d8c-4fdb-9425-692ce650f427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48170f23c47aec453cf08f74ac389c1cb871766e7d7f44fcf154214b472bd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6s9kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ddhn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077495 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077526 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077552 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.077563 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.088928 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a866f582-e240-4058-a5ab-7c73e33d80fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:22:39Z\\\",\\\"message\\\":\\\"W0130 16:22:29.286425 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:22:29.286748 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769790149 cert, and key in /tmp/serving-cert-1741341846/serving-signer.crt, /tmp/serving-cert-1741341846/serving-signer.key\\\\nI0130 16:22:29.697616 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:22:29.700322 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:22:29.700619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:22:29.703652 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741341846/tls.crt::/tmp/serving-cert-1741341846/tls.key\\\\\\\"\\\\nF0130 16:22:39.899506 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.101240 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bc80898-f7b6-4e82-8da7-1d054381e411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1819b1130927b0792934301a360f7a06a52ac8ce7dbcf343f28e111cb40d386b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c732f5834b204a4085a266c427e4c0f79b0f5f4319573e09272b32ef24722e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b538298439bfdfeb849aad966aa0801bef02442b90946d7942e437244fe99fb9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.113170 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.123801 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cc08cda8676cdb74f80078c61470230b309daa825c6bb1a07499647260de120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.137089 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca5a63-8303-4e36-8733-74136416819f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0a296aafa84488c77418bb8d4b945f5cec6783bedba7e498c2dfb3f54c39ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0375b1004edcc905daa64587295fbcb381263d97a218c568bdc2028362b2b8fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.156858 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73113a73-bf9b-47b1-9053-8dff1c9ea225\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://108f1ca5a7cf1c4f0665b5b82b00c8b911dfe22582334836d3bc8a5afe17a1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfcc8c946ea5c547539386c797026307ba8bd235fd4694341695882ec2442702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://332a9a9c49123e23601444adafca95852030d0e19a682316100bc45b0f849209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374cae6e4bbbb88f2f6fc9093a4f5597b2afeae
8361a9a76ccf384cae5d8b2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01fb269c6fb534b4e45e60f3409c21e9700bc901eda3f975e990f77a9286838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45a10d4089665cdb929797e9342a2cbcb49cf6734a3325a26037a23551bcf2de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dd974892c65b46b3e601e9d901a9a9888dcbe5d1f734b282938d46f297ffd3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c91fed698fcd080bb96cfb78c277c295568df8d5eb52e57c4656620822f6fac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.168686 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4117fac035c8b1b6c74777dad5d27f796613809156faac31ecc300015b3adfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.177334 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flxfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bd09169-41b7-4eb3-80a5-a842e79f7d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e33f1fd3759539396d477bf93a581a0b0a5fbcaa5bce27c94fc1b22caa89e3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnw6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flxfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179531 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179567 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179579 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179595 4766 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.179606 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.189311 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5feae404-d53f-4bf5-af27-07a7ce350594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bd3cad42b4c959f5cb67492e8be64db7d98c24e44958bdedc08dbdd5fb9bc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cd4d835e7db624be9961818c2531e2eab6cfa661faed78c9c14942abec0512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rc5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rg9cf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.201119 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xrldv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp9nh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xrldv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.212887 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f402ec3a-e31d-4f62-80e5-fa9bd9f7ac15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ffcd0c2a62c4b64c82096acad643c6c2baeb7a4a36b666ae69123551f364fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51487101f0eb065018293732bef4fb466fa5a1c7bcb05f905fdd52f3593119c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af645998dd1c6804061825910fa2b9b2446abe356b9c90d83546dddbcf6133a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de657c840f4dff63741fe42c702f457d123e0ba02f6602b75afe3271a7b6886\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.224611 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.234565 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vhmx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3a8d75a-1f1e-416a-a96b-c774ffdc24b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25487640d1818a351565819c2ca43d0919cc62273554633f92c90c31bea30c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctb7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vhmx5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.253066 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6a299e8-188d-4777-bb82-a0994feabcff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb4d8
9243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:41Z\\\",\\\"message\\\":\\\"_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.182\\\\\\\", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0130 16:23:41.758830 6812 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4psqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-54ngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.268248 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8da0c398-554f-47ad-aada-70e4b5c9ec98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5545aa724616d7714b8886c8505d44241291240e7ac7afd4a85192542a09e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80b954cbb49f53d28b146c147762f0a9d55f8c8ba127f4f0175d9c8364b103c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b8b2428318d64dd57487ba49a0a64e1eff62888f6d2175a22055dbdc88dbe47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2210df5ed18395c4f01c09aaa763a07f0a878b6b5a79052542a73808aae65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de1eb71bf40678046c02b54ae5287400f592016820db50dc75c7bb4c8ef62fcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b51d0d456cc2c46ab76af47e1ed595686370a87b8379cde90204be3c19107b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73385e6c95877505c66202faccf2256742edc505b688b1fef0e8679c8b68fc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:22:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dv5xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vvzk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.280308 4766 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-l6xdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a74bc5e-af98-4849-820c-7056caabc485\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:22:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:23:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:23:33Z\\\",\\\"message\\\":\\\"2026-01-30T16:22:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d\\\\n2026-01-30T16:22:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1da83fa5-bc65-441b-9b89-edba1769ad7d to /host/opt/cni/bin/\\\\n2026-01-30T16:22:48Z [verbose] multus-daemon started\\\\n2026-01-30T16:22:48Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:23:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:22:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25lp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:22:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-l6xdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:24:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281132 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281148 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281164 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.281198 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.383503 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.383537 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.383548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.383561 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.383572 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.485982 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.486060 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.486085 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.486173 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.486307 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.588945 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.589286 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.589386 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.589488 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.589577 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.693145 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.693203 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.693213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.693226 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.693236 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.795318 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.795368 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.795377 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.795390 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.795399 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.897674 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.897718 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.897727 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.897740 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:06 crc kubenswrapper[4766]: I0130 16:24:06.897749 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:06Z","lastTransitionTime":"2026-01-30T16:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.000928 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.000997 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.001012 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.001036 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.001055 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.038900 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.038945 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.039108 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039316 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:07 crc kubenswrapper[4766]: E0130 16:24:07.039752 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.053550 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:15:20.764445454 +0000 UTC Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.104339 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.104492 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.104536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.104569 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.104591 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.207714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.207845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.207868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.207893 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.207958 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.311118 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.311172 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.311213 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.311231 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.311243 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.413966 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.414029 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.414038 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.414052 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.414061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.516426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.516472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.516484 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.516501 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.516518 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.619143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.619646 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.619679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.619885 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.619917 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.723131 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.723327 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.723347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.723366 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.723378 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.826034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.826092 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.826103 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.826125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.826138 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.929559 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.929621 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.929639 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.929664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:07 crc kubenswrapper[4766]: I0130 16:24:07.929682 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:07Z","lastTransitionTime":"2026-01-30T16:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.032730 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.032785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.032800 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.032818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.032835 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.039304 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:08 crc kubenswrapper[4766]: E0130 16:24:08.039402 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.053936 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:19:24.159095681 +0000 UTC Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.134630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.134684 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.134697 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.134714 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.134726 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.236995 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.237083 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.237107 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.237138 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.237161 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.339779 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.339828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.339844 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.339870 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.339887 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.442347 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.442388 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.442396 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.442408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.442417 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.544423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.544489 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.544498 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.544514 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.544522 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.647754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.647802 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.647813 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.647831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.647842 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.750879 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.750952 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.750974 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.751002 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.751025 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.853796 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.853828 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.853837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.853850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.853859 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.957018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.957075 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.957099 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.957119 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:08 crc kubenswrapper[4766]: I0130 16:24:08.957134 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:08Z","lastTransitionTime":"2026-01-30T16:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.039323 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039356 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039514 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:09 crc kubenswrapper[4766]: E0130 16:24:09.039565 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.054981 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:56:25.473719568 +0000 UTC Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060070 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060097 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.060107 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163745 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163845 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163876 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.163899 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.267034 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.267082 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.267091 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.267106 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.267118 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.370351 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.370399 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.370408 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.370423 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.370432 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.473437 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.473504 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.473521 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.473574 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.473588 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.575998 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.576044 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.576056 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.576072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.576083 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.678472 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.678524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.678536 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.678553 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.678563 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.781750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.781825 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.781841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.781859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.781872 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.885217 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.885343 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.885360 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.885383 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.885397 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.987962 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.988022 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.988033 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.988049 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:09 crc kubenswrapper[4766]: I0130 16:24:09.988061 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:09Z","lastTransitionTime":"2026-01-30T16:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.038743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:10 crc kubenswrapper[4766]: E0130 16:24:10.038946 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.039941 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:24:10 crc kubenswrapper[4766]: E0130 16:24:10.040310 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.056148 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:22:36.364555132 +0000 UTC Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.090275 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.090344 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.090405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.090426 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.090440 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.193831 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.193895 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.193914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.193938 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.193959 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.297341 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.297391 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.297406 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.297427 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.297442 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.401041 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.401125 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.401141 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.401166 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.401237 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.504117 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.504191 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.504206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.504230 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.504243 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.607550 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.607594 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.607604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.607657 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.607670 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.711944 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.712018 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.712037 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.712063 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.712081 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.815324 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.815371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.815382 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.815405 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.815417 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.917988 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.918065 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.918123 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.918150 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:10 crc kubenswrapper[4766]: I0130 16:24:10.918164 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:10Z","lastTransitionTime":"2026-01-30T16:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.020852 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.020921 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.020932 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.020948 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.020960 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038351 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.038479 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038480 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.038556 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.038697 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:11 crc kubenswrapper[4766]: E0130 16:24:11.039156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.056948 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:18:55.626282551 +0000 UTC Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.123476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.123548 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.123576 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.123604 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.123625 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.226454 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.226512 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.226524 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.226542 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.226555 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.330913 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.330970 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.330987 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.331010 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.331027 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.433331 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.433364 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.433371 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.433384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.433393 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.534807 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.534841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.534850 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.534862 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.534871 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.637746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.637818 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.637835 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.637855 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.637869 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.740645 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.740707 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.740719 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.740733 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.740744 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.842801 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.842841 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.842853 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.842868 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.842879 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.946722 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.946798 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.946819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.946851 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:11 crc kubenswrapper[4766]: I0130 16:24:11.946874 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:11Z","lastTransitionTime":"2026-01-30T16:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.039016 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:12 crc kubenswrapper[4766]: E0130 16:24:12.039137 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.049586 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.049618 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.049626 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.049636 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.049645 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.057979 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:53:21.07565126 +0000 UTC Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.151630 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.151685 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.151698 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.151715 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.151728 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.254612 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.254640 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.254649 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.254667 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.254679 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.356805 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.356837 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.356846 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.356859 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.356867 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.458926 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.458968 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.458985 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.459000 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.459010 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.561384 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.561442 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.561458 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.561476 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.561487 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.663622 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.663653 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.663664 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.663679 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.663689 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.766126 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.766216 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.766227 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.766241 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.766250 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.869420 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.869466 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.869481 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.869494 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.869502 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.972206 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.972245 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.972256 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.972271 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:12 crc kubenswrapper[4766]: I0130 16:24:12.972283 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:12Z","lastTransitionTime":"2026-01-30T16:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039031 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039054 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039372 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.039445 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039504 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:13 crc kubenswrapper[4766]: E0130 16:24:13.039598 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.058335 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:03:54.713513657 +0000 UTC Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.075381 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.075422 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.075434 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.075450 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.075459 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.178575 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.178750 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.178783 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.178812 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.178834 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.282509 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.282580 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.282602 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.282629 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.282650 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385218 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385254 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385265 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385277 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.385287 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487734 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487785 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487799 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.487832 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590073 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590111 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590127 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590143 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.590153 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696072 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696155 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696214 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696237 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.696248 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799819 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799882 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799905 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799929 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.799946 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902517 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902675 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902686 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902737 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:13 crc kubenswrapper[4766]: I0130 16:24:13.902750 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:13Z","lastTransitionTime":"2026-01-30T16:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004712 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004746 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004754 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004766 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.004774 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.038799 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:14 crc kubenswrapper[4766]: E0130 16:24:14.038982 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.058508 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:48:31.022026758 +0000 UTC Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106823 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106881 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106920 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.106934 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133820 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133897 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133914 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133937 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.133952 4766 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:24:14Z","lastTransitionTime":"2026-01-30T16:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.195295 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"] Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.195857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.198639 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.198995 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.199548 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.200003 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.214625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.214604561 podStartE2EDuration="32.214604561s" podCreationTimestamp="2026-01-30 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.214502598 +0000 UTC m=+108.852459964" watchObservedRunningTime="2026-01-30 16:24:14.214604561 +0000 UTC m=+108.852561917" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.243944 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.243920082 podStartE2EDuration="12.243920082s" podCreationTimestamp="2026-01-30 16:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.243787418 +0000 UTC m=+108.881744804" watchObservedRunningTime="2026-01-30 16:24:14.243920082 +0000 UTC m=+108.881877448" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.289517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rg9cf" podStartSLOduration=88.289484617 podStartE2EDuration="1m28.289484617s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.28740695 +0000 UTC m=+108.925364326" watchObservedRunningTime="2026-01-30 16:24:14.289484617 +0000 UTC m=+108.927441993" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.290153 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-flxfz" podStartSLOduration=89.290140685 podStartE2EDuration="1m29.290140685s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.275838124 +0000 UTC m=+108.913795510" watchObservedRunningTime="2026-01-30 16:24:14.290140685 +0000 UTC m=+108.928098081" Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 
16:24:14.310516 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.310803 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.323327 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.323310381 podStartE2EDuration="57.323310381s" podCreationTimestamp="2026-01-30 16:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.322584562 +0000 UTC m=+108.960541908" watchObservedRunningTime="2026-01-30 16:24:14.323310381 +0000 UTC m=+108.961267717"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.345339 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vhmx5" podStartSLOduration=89.345318613 podStartE2EDuration="1m29.345318613s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.345252971 +0000 UTC m=+108.983210327" watchObservedRunningTime="2026-01-30 16:24:14.345318613 +0000 UTC m=+108.983275959"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.397862 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vvzk9" podStartSLOduration=89.397843218 podStartE2EDuration="1m29.397843218s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.384804192 +0000 UTC m=+109.022761538" watchObservedRunningTime="2026-01-30 16:24:14.397843218 +0000 UTC m=+109.035800564"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.398201 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-l6xdr" podStartSLOduration=89.398196028 podStartE2EDuration="1m29.398196028s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.397742026 +0000 UTC m=+109.035699372" watchObservedRunningTime="2026-01-30 16:24:14.398196028 +0000 UTC m=+109.036153374"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411432 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.411512 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10b6cd3c-7511-4776-adb7-f48f2bdee155-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.412544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10b6cd3c-7511-4776-adb7-f48f2bdee155-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.425253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10b6cd3c-7511-4776-adb7-f48f2bdee155-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.434641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10b6cd3c-7511-4776-adb7-f48f2bdee155-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpghm\" (UID: \"10b6cd3c-7511-4776-adb7-f48f2bdee155\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.453774 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podStartSLOduration=89.453758516 podStartE2EDuration="1m29.453758516s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.453286854 +0000 UTC m=+109.091244200" watchObservedRunningTime="2026-01-30 16:24:14.453758516 +0000 UTC m=+109.091715862"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.471245 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.471227523 podStartE2EDuration="1m29.471227523s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.470583516 +0000 UTC m=+109.108540882" watchObservedRunningTime="2026-01-30 16:24:14.471227523 +0000 UTC m=+109.109184869"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.501750 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.501728757 podStartE2EDuration="1m23.501728757s" podCreationTimestamp="2026-01-30 16:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:14.48573607 +0000 UTC m=+109.123693416" watchObservedRunningTime="2026-01-30 16:24:14.501728757 +0000 UTC m=+109.139686103"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.515067 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm"
Jan 30 16:24:14 crc kubenswrapper[4766]: I0130 16:24:14.544905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" event={"ID":"10b6cd3c-7511-4776-adb7-f48f2bdee155","Type":"ContainerStarted","Data":"cb019ecf96bad4457d0528b49e7c9763beec3d52ab36ea07c8241d8e708aaede"}
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039369 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.039553 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.039588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.039754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:15 crc kubenswrapper[4766]: E0130 16:24:15.040414 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.059328 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:38:32.504904334 +0000 UTC
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.059433 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.069528 4766 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.549109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" event={"ID":"10b6cd3c-7511-4776-adb7-f48f2bdee155","Type":"ContainerStarted","Data":"b39a72a60a8f59aec3377b15d145a1e62af0582fc6dab5efefa03cad37531e0f"}
Jan 30 16:24:15 crc kubenswrapper[4766]: I0130 16:24:15.561863 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpghm" podStartSLOduration=90.561847517 podStartE2EDuration="1m30.561847517s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:15.56160925 +0000 UTC m=+110.199566586" watchObservedRunningTime="2026-01-30 16:24:15.561847517 +0000 UTC m=+110.199804853"
Jan 30 16:24:16 crc kubenswrapper[4766]: I0130 16:24:16.038563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:16 crc kubenswrapper[4766]: E0130 16:24:16.041174 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038949 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039145 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:17 crc kubenswrapper[4766]: I0130 16:24:17.038978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039297 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:17 crc kubenswrapper[4766]: E0130 16:24:17.039379 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:18 crc kubenswrapper[4766]: I0130 16:24:18.039003 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:18 crc kubenswrapper[4766]: E0130 16:24:18.039155 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.038877 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.038892 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039298 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039453 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.039759 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.039851 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.570510 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571244 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/0.log"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571302 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" exitCode=1
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082"}
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571368 4766 scope.go:117] "RemoveContainer" containerID="5c5c8bed30a9197c6446a5914f0c049ad57450f23b502d18d360968bd9a2c008"
Jan 30 16:24:19 crc kubenswrapper[4766]: I0130 16:24:19.571855 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082"
Jan 30 16:24:19 crc kubenswrapper[4766]: E0130 16:24:19.572088 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485"
Jan 30 16:24:20 crc kubenswrapper[4766]: I0130 16:24:20.039316 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:20 crc kubenswrapper[4766]: E0130 16:24:20.039829 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:20 crc kubenswrapper[4766]: I0130 16:24:20.577112 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log"
Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039423 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039697 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.039786 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040646 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.040799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:21 crc kubenswrapper[4766]: I0130 16:24:21.041132 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"
Jan 30 16:24:21 crc kubenswrapper[4766]: E0130 16:24:21.041421 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-54ngm_openshift-ovn-kubernetes(d6a299e8-188d-4777-bb82-a0994feabcff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff"
Jan 30 16:24:22 crc kubenswrapper[4766]: I0130 16:24:22.039610 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:22 crc kubenswrapper[4766]: E0130 16:24:22.039873 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039083 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039141 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:23 crc kubenswrapper[4766]: I0130 16:24:23.039230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039412 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039495 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:23 crc kubenswrapper[4766]: E0130 16:24:23.039550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:24 crc kubenswrapper[4766]: I0130 16:24:24.038924 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:24 crc kubenswrapper[4766]: E0130 16:24:24.039228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038484 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038534 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.038906 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.040550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:25 crc kubenswrapper[4766]: I0130 16:24:25.038559 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:25 crc kubenswrapper[4766]: E0130 16:24:25.041119 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:26 crc kubenswrapper[4766]: I0130 16:24:26.039467 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.040891 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.080414 4766 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 30 16:24:26 crc kubenswrapper[4766]: E0130 16:24:26.130883 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039168 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039217 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:27 crc kubenswrapper[4766]: E0130 16:24:27.039532 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:27 crc kubenswrapper[4766]: I0130 16:24:27.039255 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:27 crc kubenswrapper[4766]: E0130 16:24:27.039666 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:28 crc kubenswrapper[4766]: I0130 16:24:28.039327 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:28 crc kubenswrapper[4766]: E0130 16:24:28.039507 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038372 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038510 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038606 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:29 crc kubenswrapper[4766]: I0130 16:24:29.038671 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:29 crc kubenswrapper[4766]: E0130 16:24:29.038727 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:30 crc kubenswrapper[4766]: I0130 16:24:30.038978 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:30 crc kubenswrapper[4766]: E0130 16:24:30.039222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039007 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039273 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:31 crc kubenswrapper[4766]: I0130 16:24:31.039007 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039399 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.039461 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:31 crc kubenswrapper[4766]: E0130 16:24:31.132403 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.039075 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.039604 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:32 crc kubenswrapper[4766]: E0130 16:24:32.039859 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.622245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:24:32 crc kubenswrapper[4766]: I0130 16:24:32.622751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c"} Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.038781 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.038940 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.038951 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.039304 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.039874 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:24:33 crc kubenswrapper[4766]: I0130 16:24:33.039874 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:33 crc kubenswrapper[4766]: E0130 16:24:33.040067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:34 crc kubenswrapper[4766]: I0130 16:24:34.040480 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:34 crc kubenswrapper[4766]: E0130 16:24:34.040725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039084 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039309 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039393 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039522 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:35 crc kubenswrapper[4766]: I0130 16:24:35.039887 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:35 crc kubenswrapper[4766]: E0130 16:24:35.039983 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.039555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:36 crc kubenswrapper[4766]: E0130 16:24:36.042784 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.043484 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"
Jan 30 16:24:36 crc kubenswrapper[4766]: E0130 16:24:36.133620 4766 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.639544 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log"
Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.642733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerStarted","Data":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"}
Jan 30 16:24:36 crc kubenswrapper[4766]: I0130 16:24:36.643273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm"
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042534 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.043723 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.042618 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.044167 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.044460 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.051089 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podStartSLOduration=112.051053975 podStartE2EDuration="1m52.051053975s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:36.673762366 +0000 UTC m=+131.311719722" watchObservedRunningTime="2026-01-30 16:24:37.051053975 +0000 UTC m=+131.689011321"
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.051596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"]
Jan 30 16:24:37 crc kubenswrapper[4766]: I0130 16:24:37.645656 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv"
Jan 30 16:24:37 crc kubenswrapper[4766]: E0130 16:24:37.646258 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3"
Jan 30 16:24:38 crc kubenswrapper[4766]: I0130 16:24:38.039332 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:38 crc kubenswrapper[4766]: E0130 16:24:38.039557 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039412 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:39 crc kubenswrapper[4766]: I0130 16:24:39.039404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:39 crc kubenswrapper[4766]: E0130 16:24:39.039762 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:39 crc kubenswrapper[4766]: E0130 16:24:39.039817 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:40 crc kubenswrapper[4766]: I0130 16:24:40.039329 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:40 crc kubenswrapper[4766]: E0130 16:24:40.039489 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038574 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:41 crc kubenswrapper[4766]: I0130 16:24:41.038608 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.038657 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.038809 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xrldv" podUID="de5fecf1-cb2c-4ae2-a240-6f8826f6dac3" Jan 30 16:24:41 crc kubenswrapper[4766]: E0130 16:24:41.039067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.038700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.042941 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:24:42 crc kubenswrapper[4766]: I0130 16:24:42.045607 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038658 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038719 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.038811 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.041454 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.041732 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.042580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:24:43 crc kubenswrapper[4766]: I0130 16:24:43.042768 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.636128 4766 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.688739 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689400 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689619 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.689880 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.690551 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.690559 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.691702 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.692142 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.692399 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.693117 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.696072 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.696827 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.697365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.698087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.699048 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.704671 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.704671 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.707833 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.708787 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.709016 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.708777 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.709962 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.710041 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.716127 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.728595 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729161 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729534 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.729575 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730222 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730550 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730737 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.730942 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.731165 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737552 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.731957 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"]
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.732392 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.734042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.734793 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735857 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735856 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.735960 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.738493 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"]
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.739116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtfgx"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.739127 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-254pk"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736064 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736262 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736361 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736430 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736516 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736581 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736590 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736633 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736647 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736648 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736734 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736803 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736851 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736906 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736959 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.736964 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737016 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.741515 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"]
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.742569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737026 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737077 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737134 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737193 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.737227 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747621 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"]
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747738 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.747747 4766 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.748084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.748968 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.750205 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.751275 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765699 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765870 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765917 
4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.765947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.766076 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.766775 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.767166 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772427 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772584 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772866 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.774609 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.777489 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.777640 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.772463 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.785764 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.785996 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786107 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786234 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786600 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786847 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786980 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787098 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787409 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.786987 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787573 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787629 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.787686 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.788220 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789082 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.789786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.790087 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.794329 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.796944 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.797752 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pr8gz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798083 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798146 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798410 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798441 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798292 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798704 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.798891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.799293 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.799423 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.800125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.801125 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.801982 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.804973 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.806473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.806770 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.808799 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810911 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809032 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809141 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809744 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.811632 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.809785 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810106 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.812993 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810480 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810524 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.810604 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 
16:24:44.813490 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.817317 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.838121 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.842117 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.842978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.843444 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.854072 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.856505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.861319 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.864071 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.866650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.866750 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.867925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.867979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868007 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868106 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868901 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.868997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869061 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 
30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.869169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870588 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870625 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.870979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871110 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871131 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871315 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.871370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.872987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.873067 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"] Jan 30 16:24:44 crc kubenswrapper[4766]: E0130 16:24:44.873623 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.373601298 +0000 UTC m=+140.011558644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874188 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874529 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874613 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn75m\" (UniqueName: \"kubernetes.io/projected/0d8527eb-86cc-45de-8821-7b80f37422d0-kube-api-access-vn75m\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.874954 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875120 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875702 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875799 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875958 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod 
\"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.875998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.876019 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.876965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-config\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.877983 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879355 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.879996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.883262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-trusted-ca\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.885089 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.885539 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.888489 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.889759 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910081 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-serving-cert\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.910567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.913087 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.913569 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.914118 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.915015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.917749 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.918500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.918970 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.919954 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.920154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.920744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.921259 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.922238 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.922783 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.924519 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926378 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.926778 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.927725 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.928803 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.930215 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.930959 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.932891 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.934044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.934225 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.935027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.936741 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.937253 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.937322 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.938086 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.938560 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.939535 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.940617 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.942398 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.942723 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.944863 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.945987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.947047 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.948149 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.949153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.950274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.952078 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984316 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984388 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984402 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984416 4766 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984457 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984482 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984496 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.984508 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.985482 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lnxcr"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.985895 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986152 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986436 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:44 crc kubenswrapper[4766]: E0130 16:24:44.986576 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.486533012 +0000 UTC m=+140.124490358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986456 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986636 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986676 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986695 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986708 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"] Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986705 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod 
\"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986874 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-audit-dir\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.986951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987520 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.987589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988116 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.988780 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.989404 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-config\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod 
\"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/587fc124-b506-4535-b8d2-1d0f6c91cfb9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.990635 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991758 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldm9\" (UniqueName: \"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: 
\"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.991993 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992082 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992197 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992278 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992309 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992397 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992431 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992897 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992924 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.992979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993017 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993070 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993129 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993155 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993264 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn75m\" (UniqueName: \"kubernetes.io/projected/0d8527eb-86cc-45de-8821-7b80f37422d0-kube-api-access-vn75m\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993429 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod 
\"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993787 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993852 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993907 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993924 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-client\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.993998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994027 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994061 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994136 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994218 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994249 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c1191290-07ee-40c4-85e8-59545986d7db-node-pullsecrets\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.994256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:44 crc kubenswrapper[4766]: I0130 16:24:44.995030 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.000229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-etcd-serving-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.000785 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8acca84e-2800-4a20-b3e8-84e021d1c001-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.001755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-audit\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.001961 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002494 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.002804 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.502772334 +0000 UTC m=+140.140729680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.002907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-dir\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.004683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005165 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-images\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acca84e-2800-4a20-b3e8-84e021d1c001-config\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.005899 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010877 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lnxcr"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.008766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010395 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.006591 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-image-import-ca\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.010837 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.007530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-serving-cert\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.011695 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-encryption-config\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 
16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012132 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012471 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012640 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012730 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.012991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013045 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013087 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013483 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013680 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-auth-proxy-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8527eb-86cc-45de-8821-7b80f37422d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.013858 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-encryption-config\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014106 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71148f4c-0b84-45c4-911c-0ec4b06cf710-audit-policies\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014668 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-config\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.014770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.015221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-machine-approver-tls\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.015558 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.016280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71148f4c-0b84-45c4-911c-0ec4b06cf710-serving-cert\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.016776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1191290-07ee-40c4-85e8-59545986d7db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018031 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8527eb-86cc-45de-8821-7b80f37422d0-serving-cert\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c1191290-07ee-40c4-85e8-59545986d7db-etcd-client\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.018964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.019432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc11d4d-16d8-43a2-9648-e0b833e8824a-metrics-tls\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.020082 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-serving-cert\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.024692 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.028245 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-92gpq"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.029273 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.032607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.053260 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.072354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.092601 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.112703 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.114469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.114612 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.614579118 +0000 UTC m=+140.252536474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115385 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115497 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115696 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115969 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"
Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.115973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6eef76-87a0-459c-b2eb-61e06ae7386d-service-ca-bundle\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116858 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116964 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-config\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.116979 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117012 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117037 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117097 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117236 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117328 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117396 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117414 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: 
\"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117815 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117831 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.117944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118055 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118077 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118133 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118196 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldm9\" (UniqueName: 
\"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118523 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118564 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118615 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") 
pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " 
pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118909 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118927 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.118970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: 
\"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119040 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119143 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119162 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.119234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.119552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/752a21cf-698e-45b3-91e2-c00b0e82d991-config\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31501ea8-c8ad-4854-bfda-157a49fd0b39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb3e5af-901e-42db-b01e-895e2d6c8171-config\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120575 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.120679 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.62066318 +0000 UTC m=+140.258620526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.120764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-default-certificate\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.121659 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c26a8d-0deb-4754-b815-4402e2aa5455-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.122539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb3e5af-901e-42db-b01e-895e2d6c8171-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.123978 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-client\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-stats-auth\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124162 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124199 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c26a8d-0deb-4754-b815-4402e2aa5455-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.124826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/752a21cf-698e-45b3-91e2-c00b0e82d991-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.125634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.125779 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af6eef76-87a0-459c-b2eb-61e06ae7386d-metrics-certs\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.126109 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.127286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.132264 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.137651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-etcd-service-ca\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.153302 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.172602 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.183005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-serving-cert\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.192449 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.212661 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220502 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.220709 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.720680951 +0000 UTC m=+140.358638307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220919 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.220943 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221273 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221336 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221395 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " 
pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221419 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-plugins-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-registration-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.221923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.221978 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.721947714 +0000 UTC m=+140.359905230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-csi-data-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222073 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-socket-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222235 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod 
\"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222314 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.222084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-mountpoint-dir\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.232420 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.252310 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.262583 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da9e3070-71fe-41f6-8549-90d97f03c16e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.273036 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.301251 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.312097 4766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.321443 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da9e3070-71fe-41f6-8549-90d97f03c16e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.323676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.323840 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.823813454 +0000 UTC m=+140.461770810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.324136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.324504 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.824493993 +0000 UTC m=+140.462451339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.352947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.357936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-images\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.373354 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.393068 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.403997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-proxy-tls\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.413989 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.425837 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.426139 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.926095795 +0000 UTC m=+140.564053281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.426384 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.426848 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:45.926830954 +0000 UTC m=+140.564788300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.432463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.452577 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.488807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfsvc\" (UniqueName: \"kubernetes.io/projected/2e83d3d7-f71f-47ab-a085-8d62e6b30f7d-kube-api-access-dfsvc\") pod \"console-operator-58897d9998-gtfgx\" (UID: \"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d\") " pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.492721 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.503651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31501ea8-c8ad-4854-bfda-157a49fd0b39-proxy-tls\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.512947 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.527561 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.527719 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.528067 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.027972025 +0000 UTC m=+140.665929371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.528629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.529113 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.029088005 +0000 UTC m=+140.667045481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.547636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"route-controller-manager-6576b87f9c-mfclt\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.552717 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.572823 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.580870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33323546-6929-4c9c-a0a3-44842b9897b4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.592105 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.604243 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33323546-6929-4c9c-a0a3-44842b9897b4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.612560 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.629967 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.630684 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.130345178 +0000 UTC m=+140.768302544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.630879 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.631958 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.13194193 +0000 UTC m=+140.769899296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.632255 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.655314 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.664910 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.672588 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.692621 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.698776 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.714721 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.722401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17af2b06-620b-4126-ac9e-f0de24c9f6bb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.732304 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.732832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.232776113 +0000 UTC m=+140.870733459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.733042 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.734036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.734642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.234624982 +0000 UTC m=+140.872582328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.741266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gtfgx"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.753299 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.766644 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17af2b06-620b-4126-ac9e-f0de24c9f6bb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.774849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.792931 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.804124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.812325 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.816764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.832917 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.835594 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.835789 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:46.335760212 +0000 UTC m=+140.973717558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.836119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.836861 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.336833622 +0000 UTC m=+140.974790988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.860066 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.868397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.872533 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.887901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.895511 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.903660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 
16:24:45.913894 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.924039 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.924886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/928166c7-a17c-4693-9ae5-1c8aa4050176-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.930391 4766 request.go:700] Waited for 1.010207713s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.933031 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.938248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.938466 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.438432444 +0000 UTC m=+141.076389790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.939007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:45 crc kubenswrapper[4766]: E0130 16:24:45.939545 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.439490292 +0000 UTC m=+141.077447638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.954415 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.963231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.978653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:24:45 crc kubenswrapper[4766]: I0130 16:24:45.993207 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.012821 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.032266 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.039992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.040369 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.540316664 +0000 UTC m=+141.178274120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.041134 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.041573 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.541555717 +0000 UTC m=+141.179513063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.051936 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.092749 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.107407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/454fa304-47eb-48d6-9fec-406888874f6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.113034 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.121688 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.131326 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.141796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.142136 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.642107402 +0000 UTC m=+141.280064748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.142433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.142832 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.642816371 +0000 UTC m=+141.280773717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.151447 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.172416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.192346 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.195952 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-serving-cert\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.212208 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221645 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for 
the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221751 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert podName:9b23bdbc-d2d1-4404-8455-4e877764c72d nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.72172425 +0000 UTC m=+141.359681776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert") pod "olm-operator-6b444d44fb-hjlfz" (UID: "9b23bdbc-d2d1-4404-8455-4e877764c72d") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221837 4766 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221910 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert podName:bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.721882264 +0000 UTC m=+141.359839630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert") pod "ingress-canary-hfk7g" (UID: "bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221949 4766 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.221997 4766 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222049 4766 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222011 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics podName:cdbd0f5d-e6fb-4960-a928-7a5dcc399239 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.721996997 +0000 UTC m=+141.359954353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics") pod "marketplace-operator-79b997595-wcmvb" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222211 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca podName:cdbd0f5d-e6fb-4960-a928-7a5dcc399239 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722137711 +0000 UTC m=+141.360095067 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca") pod "marketplace-operator-79b997595-wcmvb" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222241 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume podName:6289d893-d357-4aab-a2e9-389a422ebaa5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722228473 +0000 UTC m=+141.360185829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume") pod "dns-default-lnxcr" (UID: "6289d893-d357-4aab-a2e9-389a422ebaa5") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222330 4766 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.222411 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config podName:7082a4c2-c998-4e1c-8264-2bafcd96d0c1 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.722401417 +0000 UTC m=+141.360358773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config") pod "service-ca-operator-777779d784-vpgtw" (UID: "7082a4c2-c998-4e1c-8264-2bafcd96d0c1") : failed to sync configmap cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223097 4766 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223148 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls podName:6289d893-d357-4aab-a2e9-389a422ebaa5 nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.723135127 +0000 UTC m=+141.361092493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls") pod "dns-default-lnxcr" (UID: "6289d893-d357-4aab-a2e9-389a422ebaa5") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223191 4766 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.223223 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert podName:9b23bdbc-d2d1-4404-8455-4e877764c72d nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.72321535 +0000 UTC m=+141.361172706 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert") pod "olm-operator-6b444d44fb-hjlfz" (UID: "9b23bdbc-d2d1-4404-8455-4e877764c72d") : failed to sync secret cache: timed out waiting for the condition Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.233409 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.244082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.245410 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.745378898 +0000 UTC m=+141.383336264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.251430 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.279511 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.292910 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.311890 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.333496 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.346850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.347262 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.847246469 +0000 UTC m=+141.485203815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.351535 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.372479 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.401253 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.440568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.442770 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.448141 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.448294 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.948272586 +0000 UTC m=+141.586229942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.449014 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.449448 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:46.949432857 +0000 UTC m=+141.587390213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.451650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.473374 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.493020 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.512780 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.532126 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.550799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.551025 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.050987649 +0000 UTC m=+141.688945005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.551808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.552221 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.552317 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:47.052307484 +0000 UTC m=+141.690264830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.572551 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.623366 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.632323 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.638351 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d25l\" (UniqueName: \"kubernetes.io/projected/71148f4c-0b84-45c4-911c-0ec4b06cf710-kube-api-access-4d25l\") pod \"apiserver-7bbb656c7d-zps75\" (UID: \"71148f4c-0b84-45c4-911c-0ec4b06cf710\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.653059 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.653666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.653998 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.153937397 +0000 UTC m=+141.791894743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.654561 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.655068 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.155053067 +0000 UTC m=+141.793010413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.672102 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.692427 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.696320 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerStarted","Data":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"} Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.696376 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerStarted","Data":"777f165aaa35e8debb71a11164cf2e0013257285fafc5c165738c7722a8711a4"} Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.697846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" event={"ID":"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d","Type":"ContainerStarted","Data":"e092584838521da9e178559d35b263041054d50b5103e999ef7b3878e7fc6d19"} Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.697900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" event={"ID":"2e83d3d7-f71f-47ab-a085-8d62e6b30f7d","Type":"ContainerStarted","Data":"7e5832395019d10128aad8c35d22d08e4bc20e98146fbd6ed4f59301d7c82dc2"} Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.698172 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.700775 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-gtfgx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.700825 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podUID="2e83d3d7-f71f-47ab-a085-8d62e6b30f7d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.726301 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j868\" (UniqueName: \"kubernetes.io/projected/c1191290-07ee-40c4-85e8-59545986d7db-kube-api-access-2j868\") pod \"apiserver-76f77b778f-c75qp\" (UID: \"c1191290-07ee-40c4-85e8-59545986d7db\") " pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.753777 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.756678 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.256637719 +0000 UTC m=+141.894595075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.756904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757030 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757124 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757216 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.757674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.257654216 +0000 UTC m=+141.895611602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.757944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6289d893-d357-4aab-a2e9-389a422ebaa5-config-volume\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.758393 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.758883 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-config\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.760956 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-srv-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.761635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.762398 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9b23bdbc-d2d1-4404-8455-4e877764c72d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.764654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6289d893-d357-4aab-a2e9-389a422ebaa5-metrics-tls\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.773564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.778027 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"controller-manager-879f6c89f-dgkvz\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.792521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.802762 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-cert\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.812378 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.835323 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.850824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.853749 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.858856 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.358825427 +0000 UTC m=+141.996782813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.858708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.859606 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.860302 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.360275576 +0000 UTC m=+141.998232922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.918291 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.928848 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxn7\" (UniqueName: \"kubernetes.io/projected/8acca84e-2800-4a20-b3e8-84e021d1c001-kube-api-access-fzxn7\") pod \"machine-api-operator-5694c8668f-jn8dp\" (UID: \"8acca84e-2800-4a20-b3e8-84e021d1c001\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.950141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqv9\" (UniqueName: \"kubernetes.io/projected/0fd41a92-ef77-4a02-bd2b-089d2edb3cf4-kube-api-access-dbqv9\") pod \"openshift-config-operator-7777fb866f-7j765\" (UID: \"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.950412 4766 request.go:700] Waited for 1.947146897s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.954611 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.956296 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn75m\" (UniqueName: \"kubernetes.io/projected/0d8527eb-86cc-45de-8821-7b80f37422d0-kube-api-access-vn75m\") pod \"authentication-operator-69f744f599-txtwn\" (UID: \"0d8527eb-86cc-45de-8821-7b80f37422d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.961733 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.962091 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.462060033 +0000 UTC m=+142.100017379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.962703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:46 crc kubenswrapper[4766]: E0130 16:24:46.963096 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.463087061 +0000 UTC m=+142.101044407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.964169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5sxg\" (UniqueName: \"kubernetes.io/projected/d9f3a679-bd83-4e31-aad4-0bd228e96c33-kube-api-access-l5sxg\") pod \"downloads-7954f5f757-254pk\" (UID: \"d9f3a679-bd83-4e31-aad4-0bd228e96c33\") " pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.980626 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd67z\" (UniqueName: \"kubernetes.io/projected/2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3-kube-api-access-kd67z\") pod \"machine-approver-56656f9798-6ndwq\" (UID: \"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.990087 4766 util.go:30] "No sandbox for pod can be found. 
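The request.go:700 entry above ("Waited for 1.947146897s due to client-side throttling, not priority and fairness") is client-go's own token-bucket limiter delaying a ServiceAccount token POST; it surfaces whenever kubelet issues API requests faster than its configured QPS/burst, which is expected during a mass pod start like this one. Below is a rough sketch of the same token-bucket behaviour using golang.org/x/time/rate; the QPS and burst values are invented for illustration, standing in for kubelet's kube-api-qps / kube-api-burst settings.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical limits: 5 requests/second with a burst of 10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		// Wait blocks until a token is available - this is the
		// client-side throttling the log line reports, entirely
		// separate from the server's priority-and-fairness machinery.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if waited := time.Since(start); waited > 100*time.Millisecond {
			fmt.Printf("request %d throttled for %v\n", i, waited)
		}
	}
}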
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:46 crc kubenswrapper[4766]: I0130 16:24:46.995702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z958l\" (UniqueName: \"kubernetes.io/projected/3dc11d4d-16d8-43a2-9648-e0b833e8824a-kube-api-access-z958l\") pod \"dns-operator-744455d44c-vzmxm\" (UID: \"3dc11d4d-16d8-43a2-9648-e0b833e8824a\") " pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.008823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jhr\" (UniqueName: \"kubernetes.io/projected/587fc124-b506-4535-b8d2-1d0f6c91cfb9-kube-api-access-l9jhr\") pod \"cluster-samples-operator-665b6dd947-2h92f\" (UID: \"587fc124-b506-4535-b8d2-1d0f6c91cfb9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.015001 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.035644 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.037556 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.050134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.053487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.065486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.065645 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.565616388 +0000 UTC m=+142.203573734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.066011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.066569 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.566557814 +0000 UTC m=+142.204515160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.072286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.089618 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.114969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da9e3070-71fe-41f6-8549-90d97f03c16e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8qrfp\" (UID: \"da9e3070-71fe-41f6-8549-90d97f03c16e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.128795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/752a21cf-698e-45b3-91e2-c00b0e82d991-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8lt5\" (UID: \"752a21cf-698e-45b3-91e2-c00b0e82d991\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.151704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c75qp"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.159526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86z7\" (UniqueName: \"kubernetes.io/projected/56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a-kube-api-access-v86z7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hrm2g\" (UID: \"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.159924 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.164864 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.167697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.167862 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.667837138 +0000 UTC m=+142.305794484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.168365 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.168874 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.668853064 +0000 UTC m=+142.306810410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.182236 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.182445 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.189870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eb3e5af-901e-42db-b01e-895e2d6c8171-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fgcq7\" (UID: \"6eb3e5af-901e-42db-b01e-895e2d6c8171\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.192544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"oauth-openshift-558db77b4-sbckt\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.208449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncpz\" (UniqueName: \"kubernetes.io/projected/33323546-6929-4c9c-a0a3-44842b9897b4-kube-api-access-xncpz\") pod \"openshift-apiserver-operator-796bbdcf4f-8c8p6\" (UID: \"33323546-6929-4c9c-a0a3-44842b9897b4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.244468 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.245984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvndq\" (UniqueName: \"kubernetes.io/projected/31501ea8-c8ad-4854-bfda-157a49fd0b39-kube-api-access-wvndq\") pod \"machine-config-controller-84d6567774-5j6bc\" (UID: \"31501ea8-c8ad-4854-bfda-157a49fd0b39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.252595 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q4cz\" (UniqueName: \"kubernetes.io/projected/bb325f25-00bb-4519-99d5-94ea7bbcd9d5-kube-api-access-6q4cz\") pod \"control-plane-machine-set-operator-78cbb6b69f-28vp9\" (UID: \"bb325f25-00bb-4519-99d5-94ea7bbcd9d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.269220 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.269798 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.769779509 +0000 UTC m=+142.407736855 (durationBeforeRetry 500ms). 
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.269798 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.769779509 +0000 UTC m=+142.407736855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.270794 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4nv\" (UniqueName: \"kubernetes.io/projected/af6eef76-87a0-459c-b2eb-61e06ae7386d-kube-api-access-6d4nv\") pod \"router-default-5444994796-pr8gz\" (UID: \"af6eef76-87a0-459c-b2eb-61e06ae7386d\") " pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.273433 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.290481 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.293813 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-254pk"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.308679 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv552\" (UniqueName: \"kubernetes.io/projected/9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad-kube-api-access-cv552\") pod \"machine-config-operator-74547568cd-kbt4b\" (UID: \"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.329961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdw7k\" (UniqueName: \"kubernetes.io/projected/a8f468fe-13d2-4f44-ab3e-fd301aac78ce-kube-api-access-mdw7k\") pod \"etcd-operator-b45778765-nx7kv\" (UID: \"a8f468fe-13d2-4f44-ab3e-fd301aac78ce\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.350111 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tt9l\" (UniqueName: \"kubernetes.io/projected/16c26a8d-0deb-4754-b815-4402e2aa5455-kube-api-access-8tt9l\") pod \"cluster-image-registry-operator-dc59b4c8b-67z4k\" (UID: \"16c26a8d-0deb-4754-b815-4402e2aa5455\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.371261 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.371687 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.871676209 +0000 UTC m=+142.509633545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.391115 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldm9\" (UniqueName: \"kubernetes.io/projected/d6fc09a4-19be-4bdb-b87a-5eafbfc9981c-kube-api-access-wldm9\") pod \"migrator-59844c95c7-r7tdx\" (UID: \"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.392867 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgrk\" (UniqueName: \"kubernetes.io/projected/17af2b06-620b-4126-ac9e-f0de24c9f6bb-kube-api-access-zmgrk\") pod \"kube-storage-version-migrator-operator-b67b599dd-qdlmd\" (UID: \"17af2b06-620b-4126-ac9e-f0de24c9f6bb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.409595 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwj89\" (UniqueName: \"kubernetes.io/projected/928166c7-a17c-4693-9ae5-1c8aa4050176-kube-api-access-bwj89\") pod \"multus-admission-controller-857f4d67dd-vz9mh\" (UID: \"928166c7-a17c-4693-9ae5-1c8aa4050176\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.426004 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.429999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwjk\" (UniqueName: \"kubernetes.io/projected/9b23bdbc-d2d1-4404-8455-4e877764c72d-kube-api-access-9vwjk\") pod \"olm-operator-6b444d44fb-hjlfz\" (UID: \"9b23bdbc-d2d1-4404-8455-4e877764c72d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.430344 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.438232 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.451707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpcf\" (UniqueName: \"kubernetes.io/projected/7082a4c2-c998-4e1c-8264-2bafcd96d0c1-kube-api-access-xlpcf\") pod \"service-ca-operator-777779d784-vpgtw\" (UID: \"7082a4c2-c998-4e1c-8264-2bafcd96d0c1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.453428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.468026 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.470504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"marketplace-operator-79b997595-wcmvb\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") " pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.471937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.472611 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:47.972578004 +0000 UTC m=+142.610535360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.483226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.490512 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.493488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/454fa304-47eb-48d6-9fec-406888874f6f-kube-api-access-9tkv6\") pod \"package-server-manager-789f6589d5-zqpn4\" (UID: \"454fa304-47eb-48d6-9fec-406888874f6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.499273 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.507588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.514814 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8lk\" (UniqueName: \"kubernetes.io/projected/6289d893-d357-4aab-a2e9-389a422ebaa5-kube-api-access-cs8lk\") pod \"dns-default-lnxcr\" (UID: \"6289d893-d357-4aab-a2e9-389a422ebaa5\") " pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.515422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.523299 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.529993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"console-f9d7485db-8fgxh\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.532869 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.539695 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.551386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpjf\" (UniqueName: \"kubernetes.io/projected/bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0-kube-api-access-qwpjf\") pod \"ingress-canary-hfk7g\" (UID: \"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0\") " pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.554832 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.570233 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.573878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.574691 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.07465797 +0000 UTC m=+142.712615316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.575791 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.590964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-txtwn"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.599705 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.600364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7j765"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.606267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vzmxm"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.611806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gcf\" (UniqueName: \"kubernetes.io/projected/c71faa34-d1e9-4e10-911a-8cc1ccb436c0-kube-api-access-66gcf\") pod \"csi-hostpathplugin-vljjd\" (UID: \"c71faa34-d1e9-4e10-911a-8cc1ccb436c0\") " pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.639944 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.647015 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.654151 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hfk7g" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679261 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.679464 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.179430496 +0000 UTC m=+142.817387842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679795 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.679988 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680071 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680122 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 
crc kubenswrapper[4766]: I0130 16:24:47.680520 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680574 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680709 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.680872 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681076 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681725 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.681774 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.181758318 +0000 UTC m=+142.819715664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.681928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.682005 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.698427 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jn8dp"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.699884 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.755781 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g"] Jan 30 16:24:47 crc kubenswrapper[4766]: 
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.760412 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.781880 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.783033 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f"]
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.784427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.785907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-cabundle\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.786036 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.286012111 +0000 UTC m=+142.923969457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.790473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.790697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791859 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.791932 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.792972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793001 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.793097 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.794838 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.294821447 +0000 UTC m=+142.932778783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.796522 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/236f27f9-0389-4143-8014-18eb1f125468-tmpfs\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.797323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06c79303-4409-4d40-8b87-66904d05a635-trusted-ca\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.797579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.800091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2e4b551-3838-4db9-8ee2-363473a40bc4-signing-key\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-profile-collector-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.803609 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06c79303-4409-4d40-8b87-66904d05a635-metrics-tls\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.812106 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236f27f9-0389-4143-8014-18eb1f125468-webhook-cert\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.812132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb029d61-d79f-45a8-88f1-2c190d9315eb-srv-cert\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.814255 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"453741f26b3ed7a14992c9725d66eb2123ad6d2924bc25f9e558bc21015df26f"}
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.831602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs45g\" (UniqueName: \"kubernetes.io/projected/cb029d61-d79f-45a8-88f1-2c190d9315eb-kube-api-access-xs45g\") pod \"catalog-operator-68c6474976-gtc8b\" (UID: \"cb029d61-d79f-45a8-88f1-2c190d9315eb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.832415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-console/downloads-7954f5f757-254pk" event={"ID":"d9f3a679-bd83-4e31-aad4-0bd228e96c33","Type":"ContainerStarted","Data":"482247736a6b9798585a7bfb91e8563590e9e069d111adabf4004414cdb75d24"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.832489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-254pk" event={"ID":"d9f3a679-bd83-4e31-aad4-0bd228e96c33","Type":"ContainerStarted","Data":"841f424f8401c8e324936c2900408b4414e1055b54b6b487f0054fad637340a2"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.833056 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.849622 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.850482 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"7d5198e682adb277aef82e5a6cb369b7c0fef6a5ded9d6edbc28d5907dc5f74f"} Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.850127 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda9e3070_71fe_41f6_8549_90d97f03c16e.slice/crio-0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13 WatchSource:0}: Error finding container 0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13: Status 404 returned error can't find the container with id 0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13 Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.851923 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8acca84e_2800_4a20_b3e8_84e021d1c001.slice/crio-161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c WatchSource:0}: Error finding container 161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c: Status 404 returned error can't find the container with id 161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.853017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"collect-profiles-29496495-brtsv\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.853594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerStarted","Data":"2671a13dece461b0f7ac5d5cf28d322e51625ef00baf6de4ac368b736fd3c301"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.854354 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k"] Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.860290 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" event={"ID":"0d8527eb-86cc-45de-8821-7b80f37422d0","Type":"ContainerStarted","Data":"43496fbf302ed3230717bce41731ca26bacde92ea4fa65f4768c824a9d6d476a"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.861315 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.861408 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.865385 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerStarted","Data":"cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.865443 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerStarted","Data":"442796fe00494142d89b0e1b9d6820cd3ac80019a54bf8a35e0ec68f7d85bbbf"} Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866374 4766 patch_prober.go:28] interesting pod/console-operator-58897d9998-gtfgx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866435 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866451 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podUID="2e83d3d7-f71f-47ab-a085-8d62e6b30f7d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.866482 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.868484 4766 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dgkvz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.868677 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 30 16:24:47 crc 
kubenswrapper[4766]: I0130 16:24:47.877965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpdx\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-kube-api-access-rqpdx\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.883398 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.888974 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06c79303-4409-4d40-8b87-66904d05a635-bound-sa-token\") pod \"ingress-operator-5b745b69d9-92b8r\" (UID: \"06c79303-4409-4d40-8b87-66904d05a635\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897451 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.897845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: E0130 16:24:47.898346 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.398313669 +0000 UTC m=+143.036271155 (durationBeforeRetry 500ms). 
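
Every one of these MountDevice and TearDown failures has the same root cause: the kubelet resolves a CSI volume's driver through an in-memory registry that is populated only when the driver's node plugin registers over the kubelet's plugin-registration socket. On this freshly restarted node the volume reconciler runs before the kubevirt.io.hostpath-provisioner plugin has re-registered, so every lookup fails with "not found in the list of registered CSI drivers" until registration completes. A minimal Go sketch of that lookup pattern, with illustrative names (driverRegistry, client) rather than the kubelet's actual types:

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry mimics the kubelet's map of registered CSI plugins.
    // Entries appear only after a node plugin announces itself on the
    // plugin-registration socket; until then lookups must fail fast.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint
    }

    func (r *driverRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // The condition surfaced repeatedly in the log above.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}}
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("mount attempt fails:", err) // retried later with backoff
        }
    }

Once the plugin comes up and announces itself on the socket, the very same retries succeed without any other change.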
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.907080 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.910840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-node-bootstrap-token\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.911899 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdx5s\" (UniqueName: \"kubernetes.io/projected/e2e4b551-3838-4db9-8ee2-363473a40bc4-kube-api-access-wdx5s\") pod \"service-ca-9c57cc56f-n5kg4\" (UID: \"e2e4b551-3838-4db9-8ee2-363473a40bc4\") " pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.913932 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae6eef10-afa3-4bb1-b57a-5a89d305467e-certs\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:47 crc kubenswrapper[4766]: W0130 16:24:47.917502 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c26a8d_0deb_4754_b815_4402e2aa5455.slice/crio-16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c WatchSource:0}: Error finding container 16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c: Status 404 returned error can't find the container with id 16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.938155 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwd6c\" (UniqueName: \"kubernetes.io/projected/236f27f9-0389-4143-8014-18eb1f125468-kube-api-access-pwd6c\") pod \"packageserver-d55dfcdfc-5hqpk\" (UID: \"236f27f9-0389-4143-8014-18eb1f125468\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:47 crc kubenswrapper[4766]: I0130 16:24:47.952998 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:47.977886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gqp\" (UniqueName: \"kubernetes.io/projected/ae6eef10-afa3-4bb1-b57a-5a89d305467e-kube-api-access-x4gqp\") pod \"machine-config-server-92gpq\" (UID: \"ae6eef10-afa3-4bb1-b57a-5a89d305467e\") " pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:47.999441 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:47.999863 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.49984397 +0000 UTC m=+143.137801316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.107922 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.108599 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.608575163 +0000 UTC m=+143.246532509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.148889 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.161223 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.191309 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.210246 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.210927 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.710904684 +0000 UTC m=+143.348862030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.265348 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-92gpq" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.297073 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.311234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.311662 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.811629914 +0000 UTC m=+143.449587260 (durationBeforeRetry 500ms). 
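
The "No sandbox for pod can be found. Need to start a new one" lines are the pod workers' first decision on each of these pods after the restart: no usable pod sandbox (the infrastructure container that holds the pod's network namespace) exists in the runtime, so pod sync creates one before any workload containers are started; the ContainerStarted events that carry an ID but no named container are likely those sandboxes appearing. A sketch of that decision, with illustrative names:

    package main

    import "fmt"

    // sandboxStatus is the minimal state pod sync needs: does a ready
    // sandbox already exist for this pod in the runtime?
    type sandboxStatus struct {
        exists, ready bool
    }

    // ensureSandbox mirrors the logged decision: with no usable sandbox,
    // create a fresh one and report its ID so containers can join it.
    func ensureSandbox(pod string, st sandboxStatus, create func(string) string) string {
        if st.exists && st.ready {
            return "" // reuse the existing sandbox
        }
        fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", pod)
        return create(pod)
    }

    func main() {
        id := ensureSandbox("openshift-machine-config-operator/machine-config-server-92gpq",
            sandboxStatus{}, func(string) string { return "296b99530e5a" })
        fmt.Println("started sandbox:", id)
    }
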
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.394670 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nx7kv"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.396358 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-254pk" podStartSLOduration=123.396324816 podStartE2EDuration="2m3.396324816s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:48.38781461 +0000 UTC m=+143.025771956" watchObservedRunningTime="2026-01-30 16:24:48.396324816 +0000 UTC m=+143.034282162" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.421418 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.421903 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:48.921886066 +0000 UTC m=+143.559843412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.439130 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.441101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.522101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.522560 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
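
The pod_startup_latency_tracker entries are plain arithmetic over the fields they print: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and because both pull timestamps here are the zero time (0001-01-01), no image-pull window is subtracted — the images were already on the node. Checking the downloads-7954f5f757-254pk entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both timestamps are copied verbatim from the log entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-30 16:22:45 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-30 16:24:48.396324816 +0000 UTC")

        // No pull window is subtracted because firstStartedPulling and
        // lastFinishedPulling are the zero time in this entry.
        slo := running.Sub(created)
        fmt.Println(slo) // 2m3.396324816s
    }

The printed 2m3.396324816s matches both the logged podStartSLOduration=123.396324816 and podStartE2EDuration="2m3.396324816s".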
No retries permitted until 2026-01-30 16:24:49.022535874 +0000 UTC m=+143.660493220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: W0130 16:24:48.616631 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31501ea8_c8ad_4854_bfda_157a49fd0b39.slice/crio-ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47 WatchSource:0}: Error finding container ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47: Status 404 returned error can't find the container with id ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47 Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.626013 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.626394 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.126378116 +0000 UTC m=+143.764335462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.727324 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.727870 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.227810465 +0000 UTC m=+143.865767811 (durationBeforeRetry 500ms). 
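
The nestedpendingoperations errors show how the kubelet paces these retries: a failed volume operation is parked under its volume/pod key and no retry is permitted before the printed deadline — in every entry here, 500ms after the failure — which is why the same mount/unmount pair reappears roughly twice a second until the driver registers. A compact sketch of that park-until-deadline gate, assuming the fixed 500ms delay seen in these entries (the real kubelet keeps a whole table of such operations):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errDriverNotRegistered = errors.New(
        "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

    // pendingOp mirrors the "No retries permitted until <deadline>" gate:
    // a failed operation records a deadline and is skipped until it passes.
    type pendingOp struct {
        notBefore time.Time
    }

    func (p *pendingOp) tryMount(now time.Time, mount func() error) {
        if now.Before(p.notBefore) {
            return // still parked
        }
        if err := mount(); err != nil {
            p.notBefore = now.Add(500 * time.Millisecond) // durationBeforeRetry
            fmt.Printf("failed, next retry after %s: %v\n",
                p.notBefore.Format(time.RFC3339Nano), err)
        }
    }

    func main() {
        op := &pendingOp{}
        for i := 0; i < 3; i++ {
            op.tryMount(time.Now(), func() error { return errDriverNotRegistered })
            time.Sleep(600 * time.Millisecond)
        }
    }
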
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.808764 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podStartSLOduration=122.808742457 podStartE2EDuration="2m2.808742457s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:48.806669143 +0000 UTC m=+143.444626489" watchObservedRunningTime="2026-01-30 16:24:48.808742457 +0000 UTC m=+143.446699803" Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.828954 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.829963 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.329941471 +0000 UTC m=+143.967898817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.876437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" event={"ID":"a8f468fe-13d2-4f44-ab3e-fd301aac78ce","Type":"ContainerStarted","Data":"da251e530ab2ae213417afd42802bcd7683d136713137e0b510b21cdbfe6eb43"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.881389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" event={"ID":"6eb3e5af-901e-42db-b01e-895e2d6c8171","Type":"ContainerStarted","Data":"4de9e4627339e1fcae6802873a837882e15805acebb31ecd9a71512f2df2f935"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.884453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"6ad5204974828ceb5cbfe7d2872cfeadbc7fd55a349a46eda58bf9243f7f8807"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.887624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerStarted","Data":"ddf6c9f183093e3abd62fdf360fb6093bb986bc490a6c0e7b7f79dd126d78283"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.893049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" event={"ID":"da9e3070-71fe-41f6-8549-90d97f03c16e","Type":"ContainerStarted","Data":"0f5669dd5f65cf1966bdd3e7bcc330eeec62b86b4c3c4705acd1e3306ee4ce13"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.896134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" event={"ID":"16c26a8d-0deb-4754-b815-4402e2aa5455","Type":"ContainerStarted","Data":"a0e7715b8beb895fdad8948f13686a6e4856da0d9638f596d65fb28a29771549"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.896247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" event={"ID":"16c26a8d-0deb-4754-b815-4402e2aa5455","Type":"ContainerStarted","Data":"16711d843ae6874f2a105c7cedac574863068c1e52824fa651da2e5e171e041c"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.897977 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"161f5f9607be84f92f5e317ccd3606999115bafa037366c74c6bc3a23e59209c"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.899785 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" 
event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"d410fd65fd3929bcfa340dde5e3c83faefd8e517018e4cc42fb98f267ae5457b"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.902963 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerDied","Data":"1a422a313bd56f96b0268135869b328990e93c424eeb46ad57ae692d569fb0de"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.902858 4766 generic.go:334] "Generic (PLEG): container finished" podID="71148f4c-0b84-45c4-911c-0ec4b06cf710" containerID="1a422a313bd56f96b0268135869b328990e93c424eeb46ad57ae692d569fb0de" exitCode=0 Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.909499 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b"] Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.925435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"08c5b468189bfe5f87ad1830d0d5545ac942bc2100fb13af56ab34a46a906741"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.928541 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" event={"ID":"752a21cf-698e-45b3-91e2-c00b0e82d991","Type":"ContainerStarted","Data":"6905d5eb8ef040a03594ce180dad5fcb64bf67647935e14512561f3c56d254a1"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.930851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:48 crc kubenswrapper[4766]: E0130 16:24:48.931620 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.431586485 +0000 UTC m=+144.069543821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.932087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerStarted","Data":"a6184cf8b16957ad6df32ef60f66d31e49cd6a8b7088d60d3d7abeb822aa03d8"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.934633 4766 generic.go:334] "Generic (PLEG): container finished" podID="c1191290-07ee-40c4-85e8-59545986d7db" containerID="daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef" exitCode=0 Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.934737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerDied","Data":"daec32ddeb71cafc72ea9f18114392a006f36902490cf83d409c0b69bb0480ef"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.939428 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"ebd7f08de6e087a24d4fbb3ad0ed25fca08854c043913e0bb81fa83fbb7dae47"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.948881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"ecbc5022a09a2680184de1da4ce4b20a3d1d35bd4d0e5b84f23bd6c7f61891fc"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.951534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pr8gz" event={"ID":"af6eef76-87a0-459c-b2eb-61e06ae7386d","Type":"ContainerStarted","Data":"c072450d73a30397006517ca4a1710297da525d142769091fd9260d5e9d902a4"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.957153 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" event={"ID":"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a","Type":"ContainerStarted","Data":"d5a97bf1b7443a01476607e93b1a10db15b399d5c4c579f36ab578a3f39e7592"} Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.966345 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:48 crc kubenswrapper[4766]: I0130 16:24:48.966452 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.016480 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.036765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.038187 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.53815302 +0000 UTC m=+144.176110366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.134248 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podStartSLOduration=123.134221726 podStartE2EDuration="2m3.134221726s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.128661698 +0000 UTC m=+143.766619044" watchObservedRunningTime="2026-01-30 16:24:49.134221726 +0000 UTC m=+143.772179062" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.137923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.138462 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.638422467 +0000 UTC m=+144.276379813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.150734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.151303 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.65128924 +0000 UTC m=+144.289246586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: W0130 16:24:49.194626 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9669ae_a5fc_4e59_b2b7_3ae1ebf6f3ad.slice/crio-5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756 WatchSource:0}: Error finding container 5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756: Status 404 returned error can't find the container with id 5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756 Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.256501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.257014 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.75696855 +0000 UTC m=+144.394925896 (durationBeforeRetry 500ms). 
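
The manager.go:1169 "Failed to process watch event ... Status 404" warnings appear to be a benign startup race in the kubelet's embedded cAdvisor: a cgroup-creation event arrives for a new crio-<id> slice before the container can be inspected, the lookup 404s, and the event is dropped. The same IDs resurface normally afterwards — 5e02b706... fails the lookup in the warning just above and then shows up as ContainerStarted for machine-config-operator-74547568cd-kbt4b shortly after. A sketch of that tolerate-and-skip handling (illustrative names, not cAdvisor's actual code):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("Status 404 returned error can't find the container with id")

    // onWatchEvent resolves a container named by a cgroup event; a 404 is
    // logged and swallowed because a later relist will pick the container
    // up once the runtime can actually serve it.
    func onWatchEvent(id string, lookup func(string) error) {
        if err := lookup(id); err != nil {
            fmt.Printf("W Failed to process watch event for %s: %v\n", id, err)
            return // benign: dropped, reconciled later
        }
        fmt.Println("container registered:", id)
    }

    func main() {
        onWatchEvent("5e02b706cb8f", func(string) error { return errNotFound })
    }
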
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.257284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.257641 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.757626739 +0000 UTC m=+144.395584085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.367700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.369443 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.869407392 +0000 UTC m=+144.507364748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.369647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.375888 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.874478536 +0000 UTC m=+144.512435882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.465995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" podStartSLOduration=124.46596399 podStartE2EDuration="2m4.46596399s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.409330924 +0000 UTC m=+144.047288310" watchObservedRunningTime="2026-01-30 16:24:49.46596399 +0000 UTC m=+144.103921336" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.466952 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.471819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.472403 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:49.972379291 +0000 UTC m=+144.610336637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.482038 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.495153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.575927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.577498 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.077477407 +0000 UTC m=+144.715434743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.680685 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.681269 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.181243447 +0000 UTC m=+144.819200793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.791767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.792636 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.29261333 +0000 UTC m=+144.930570676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.889429 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-67z4k" podStartSLOduration=123.889353943 podStartE2EDuration="2m3.889353943s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.820221354 +0000 UTC m=+144.458178710" watchObservedRunningTime="2026-01-30 16:24:49.889353943 +0000 UTC m=+144.527311299" Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.892872 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.911442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"] Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.913222 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.413187587 +0000 UTC m=+145.051144933 (durationBeforeRetry 500ms). 
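
The recurring "SyncLoop UPDATE" source="api" lines are the other input feeding the same loop: pod objects changed on the API server arrive over the kubelet's config channel and are dispatched to per-pod workers, interleaved with the PLEG and probe channels seen throughout this window. A schematic select in that style (illustrative; the real loop multiplexes more sources, including housekeeping and probe results):

    package main

    import "fmt"

    // One sync-loop iteration: multiplex pod updates from the API server
    // with runtime (PLEG) events; each case triggers a re-sync of the
    // named pods.
    func syncLoopIteration(updates <-chan []string, pleg <-chan string) {
        select {
        case pods := <-updates:
            fmt.Printf("SyncLoop UPDATE source=%q pods=%v\n", "api", pods)
        case ev := <-pleg:
            fmt.Println("SyncLoop (PLEG): event for pod", ev)
        }
    }

    func main() {
        updates := make(chan []string, 1)
        pleg := make(chan string, 1)
        updates <- []string{"openshift-dns/dns-default-lnxcr"}
        pleg <- "openshift-ingress/router-default-5444994796-pr8gz"
        syncLoopIteration(updates, pleg) // picks whichever input is ready
        syncLoopIteration(updates, pleg) // drains the other one
    }
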
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.920103 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.933514 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lnxcr"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.956435 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.977113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" event={"ID":"0d8527eb-86cc-45de-8821-7b80f37422d0","Type":"ContainerStarted","Data":"5bf2aaca3ffc9a6f0b3865148f1db3fe9ff5d8edbd775010a4143273d6d7148b"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.977640 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd"] Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.978151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" event={"ID":"33323546-6929-4c9c-a0a3-44842b9897b4","Type":"ContainerStarted","Data":"3e3e5f34546852f9f865ce293d49cf74902831ac94e7372ebd8fbf9c35b342d2"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.979058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" event={"ID":"da9e3070-71fe-41f6-8549-90d97f03c16e","Type":"ContainerStarted","Data":"3119f9563fe5793394a0aa2da3e100fe6d9a4bd23dbeebbc51069eec3f569033"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.986467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pr8gz" event={"ID":"af6eef76-87a0-459c-b2eb-61e06ae7386d","Type":"ContainerStarted","Data":"8df16e695c546b49c3fb9e0f2f6b9286cf04964bc89bd976fd8f255d3b0ffb9c"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.991689 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" event={"ID":"7082a4c2-c998-4e1c-8264-2bafcd96d0c1","Type":"ContainerStarted","Data":"90ed4770024b3a935ff695b330dd2070d63f9090327d3c9c82f7ac1923e50390"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.992860 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-92gpq" event={"ID":"ae6eef10-afa3-4bb1-b57a-5a89d305467e","Type":"ContainerStarted","Data":"296b99530e5aec0667f1585adc7769f2e22feb2beeb616aacb60bfdf325d5645"} Jan 30 16:24:49 crc kubenswrapper[4766]: I0130 16:24:49.995290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:49 crc kubenswrapper[4766]: E0130 16:24:49.996338 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.496322018 +0000 UTC m=+145.134279364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.013159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"5e02b706cb8fb62adb14df73fad5c37b79cfa14658befa81ca68cc58318bd756"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.016696 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hfk7g"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.031641 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.035837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"770d6e0a58032340a0944dbb22c0ab598c6a53cda36eadedcc32b80f603d6e08"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.054493 4766 generic.go:334] "Generic (PLEG): container finished" podID="0fd41a92-ef77-4a02-bd2b-089d2edb3cf4" containerID="bbdd72910bf69cedc9b201ca08b0d2cf32920301a60836f6830d189b4fae9f6c" exitCode=0 Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.056131 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerDied","Data":"bbdd72910bf69cedc9b201ca08b0d2cf32920301a60836f6830d189b4fae9f6c"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.072772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"f1bcfef40c047ee2d486510556be4c02c15197feb65c844e1b250852a3541990"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.085071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" event={"ID":"56e829ed-8cb2-48e8-bc6e-b5a6ec346d4a","Type":"ContainerStarted","Data":"599ebdbf5ebee78e6d458684bbc734c349630f6355f0b75ec6a80fa5519e47a0"} Jan 30 16:24:50 crc kubenswrapper[4766]: W0130 16:24:50.093486 4766 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6289d893_d357_4aab_a2e9_389a422ebaa5.slice/crio-229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff WatchSource:0}: Error finding container 229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff: Status 404 returned error can't find the container with id 229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.096732 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.097473 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.597444078 +0000 UTC m=+145.235401424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.097934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.098706 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.598682141 +0000 UTC m=+145.236639637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.145536 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"314df87171b22bdeda0433f572a5232af51a1d1b4dcf4b0bef93c38a9b32f0b0"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.162078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.179419 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-txtwn" podStartSLOduration=124.179394789 podStartE2EDuration="2m4.179394789s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.165745975 +0000 UTC m=+144.803703321" watchObservedRunningTime="2026-01-30 16:24:50.179394789 +0000 UTC m=+144.817352135" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.196424 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n5kg4"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.199001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"3250aa1b6948fb4c1d00424aab5e2b385f337b6ce92155e423c37dd416a4e57d"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.199398 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.200596 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.700570291 +0000 UTC m=+145.338527637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.203103 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vz9mh"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.228288 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.228375 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.234321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vljjd"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.234944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r"] Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.237585 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hrm2g" podStartSLOduration=124.237557785 podStartE2EDuration="2m4.237557785s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.222425253 +0000 UTC m=+144.860382609" watchObservedRunningTime="2026-01-30 16:24:50.237557785 +0000 UTC m=+144.875515131" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.258444 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pr8gz" podStartSLOduration=124.258422911 podStartE2EDuration="2m4.258422911s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.256976652 +0000 UTC m=+144.894934008" watchObservedRunningTime="2026-01-30 16:24:50.258422911 +0000 UTC m=+144.896380247" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.281927 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"256047d4257fc7a44d1ece1f87cbb5c8d5501e1d7fbc18af85fcf19357650f6b"} Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.296425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" podStartSLOduration=124.29640038 podStartE2EDuration="2m4.29640038s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.295698782 +0000 UTC m=+144.933656128" watchObservedRunningTime="2026-01-30 16:24:50.29640038 +0000 UTC m=+144.934357726" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 
16:24:50.300590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.301563 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.801544627 +0000 UTC m=+145.439501973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.302056 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.302094 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.340610 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8qrfp" podStartSLOduration=124.340588446 podStartE2EDuration="2m4.340588446s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.335892492 +0000 UTC m=+144.973849848" watchObservedRunningTime="2026-01-30 16:24:50.340588446 +0000 UTC m=+144.978545782" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.402967 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.406116 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:50.906085179 +0000 UTC m=+145.544042525 (durationBeforeRetry 500ms). 
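[Editor's note] The probe failures around here ("connect: connection refused" against 10.217.0.12:8080 and localhost:1936) are the ordinary startup race: kubelet begins probing as soon as a container reports started, before the process has bound its port. Once the router does bind, its startup probe flips from "connection refused" to an HTTP 500 until the backends sync (see the "[-]has-synced failed" output a little further on). A minimal sketch of such a gated healthz endpoint, with hypothetical handler logic and the router's port used purely for illustration:

    package main

    import (
        "net/http"
        "sync/atomic"
    )

    func main() {
        var synced atomic.Bool // set once initial state has loaded

        http.HandleFunc("/healthz/ready", func(w http.ResponseWriter, r *http.Request) {
            if !synced.Load() {
                // Comparable to the router's "[-]has-synced failed ... statuscode: 500".
                http.Error(w, "healthz check failed", http.StatusInternalServerError)
                return
            }
            w.WriteHeader(http.StatusOK) // probe passes from here on
        })

        go func() {
            // ... load routes/backends, then:
            synced.Store(true)
        }()

        // Until this listener is up, probes fail with "connection refused".
        http.ListenAndServe(":1936", nil)
    }

[End editor's note]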
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.410457 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" podStartSLOduration=125.410429334 podStartE2EDuration="2m5.410429334s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.358280697 +0000 UTC m=+144.996238043" watchObservedRunningTime="2026-01-30 16:24:50.410429334 +0000 UTC m=+145.048386680" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.451771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.452405 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.452494 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.509595 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.511690 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.011672597 +0000 UTC m=+145.649629943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.612360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.612982 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.112961271 +0000 UTC m=+145.750918617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.714885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.715698 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.215675264 +0000 UTC m=+145.853632600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.816418 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.817077 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.31705613 +0000 UTC m=+145.955013476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:50 crc kubenswrapper[4766]: I0130 16:24:50.921704 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:50 crc kubenswrapper[4766]: E0130 16:24:50.922247 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.422223668 +0000 UTC m=+146.060181014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.022818 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.023166 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.523133073 +0000 UTC m=+146.161090419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.023383 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.023760 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.523748329 +0000 UTC m=+146.161705675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.133058 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.133325 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.633296363 +0000 UTC m=+146.271253709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.133633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.134087 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.634074614 +0000 UTC m=+146.272031970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.235256 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.235526 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.735485662 +0000 UTC m=+146.373442998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.235672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.236367 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.736357565 +0000 UTC m=+146.374314911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.338992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.339741 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.839718734 +0000 UTC m=+146.477676080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.369999 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" event={"ID":"8acca84e-2800-4a20-b3e8-84e021d1c001","Type":"ContainerStarted","Data":"7405ae37c26b0581853db3ccac8ce6dd159a12ff270e5eaa1ff4742c800c28ae"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.379585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" event={"ID":"236f27f9-0389-4143-8014-18eb1f125468","Type":"ContainerStarted","Data":"6363198d2d71917d8b884b64446a2ebb6a1046c1c91849449bbbdea23eee6260"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.389793 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerStarted","Data":"b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.389860 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerStarted","Data":"7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.394034 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" event={"ID":"33323546-6929-4c9c-a0a3-44842b9897b4","Type":"ContainerStarted","Data":"e1c31ad8125853f8ec6630ad7159cee1cf9b16658bbe92eca33d530b84460071"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.414791 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jn8dp" podStartSLOduration=125.414770331 podStartE2EDuration="2m5.414770331s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.413050815 +0000 UTC m=+146.051008171" watchObservedRunningTime="2026-01-30 16:24:51.414770331 +0000 UTC m=+146.052727677" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.437104 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" event={"ID":"71148f4c-0b84-45c4-911c-0ec4b06cf710","Type":"ContainerStarted","Data":"cdab0ea604c4498b9ce6f2b77f1393d36bdca6102e490574ce37c01f5b6bc92e"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.440696 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.441951 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:51.941931713 +0000 UTC m=+146.579889059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.462600 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:51 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:51 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:51 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.463126 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.472233 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8c8p6" podStartSLOduration=125.472219109 podStartE2EDuration="2m5.472219109s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.469921137 +0000 UTC m=+146.107878473" watchObservedRunningTime="2026-01-30 16:24:51.472219109 +0000 UTC m=+146.110176455" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 
16:24:51.483125 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6ndwq" event={"ID":"2f7dd292-e5ea-4fe6-b6ec-da47748f1fc3","Type":"ContainerStarted","Data":"268d7193ac5bf2744bf25326aabc4c15019a681734b5d92fd842657e4918c259"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.503402 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" podStartSLOduration=126.503376367 podStartE2EDuration="2m6.503376367s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.502763361 +0000 UTC m=+146.140720707" watchObservedRunningTime="2026-01-30 16:24:51.503376367 +0000 UTC m=+146.141333713" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.525922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"229b98c949c5936978b80a3f9feba18a7c1ba1e83d267ace9e8c25a3b7ad85ff"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.534500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" event={"ID":"9b23bdbc-d2d1-4404-8455-4e877764c72d","Type":"ContainerStarted","Data":"447d9f52c39dc0821d8ea59a6af5c7fcbf332a8d3ca17855028b0af3d2557b54"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.555059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.556419 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.056394148 +0000 UTC m=+146.694351494 (durationBeforeRetry 500ms). 
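[Editor's note] The pod_startup_latency_tracker entries scattered through this section are the most useful signal in the noise: each records a pod's SLO and end-to-end startup duration (2m3s-2m6s for most operators here). A small Go filter for pulling them out of a journal stream; the regexp is written against the exact line shape shown above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches kubelet lines like:
    //   "Observed pod startup duration" pod="ns/name" ... podStartE2EDuration="2m5.4s" ...
    var re = regexp.MustCompile(`pod="([^"]+)".*?podStartE2EDuration="([^"]+)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("%-75s %s\n", m[1], m[2])
            }
        }
    }

Usage would be something like (assuming journal access on the node): journalctl -u kubelet --no-pager | go run startup-durations.go
[End editor's note]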
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.577680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" event={"ID":"6eb3e5af-901e-42db-b01e-895e2d6c8171","Type":"ContainerStarted","Data":"0a0f824d256d03cc1a540cba346b16459a55c3f2556c7bd1cc3b5a8f60e24c23"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.590405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"32f740ab70487b2548cac5ac73175d1e67a39887c63387e05090860fbc3167ea"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.627705 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" event={"ID":"a8f468fe-13d2-4f44-ab3e-fd301aac78ce","Type":"ContainerStarted","Data":"96b1d76ae7d550f294ede95c3059a877b6f0998f8aacd8265f3707197ee543a9"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.659910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hfk7g" event={"ID":"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0","Type":"ContainerStarted","Data":"f4b622016ff6c0c01945c575adc8b50ab5bd534d066466f2f142a45da3704375"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.663201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.665028 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.165015627 +0000 UTC m=+146.802972963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.673390 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"537c55ab56b69ecb980d12d859877fb379228ff1661c3331dce60fb6e6cfdbb7"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.673450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" event={"ID":"31501ea8-c8ad-4854-bfda-157a49fd0b39","Type":"ContainerStarted","Data":"759b72404c1cef5e7791c2725e441b5d4c1e8d16182caaa05112a21632b675ed"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.714103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.715315 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.717799 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nx7kv" podStartSLOduration=125.717776651 podStartE2EDuration="2m5.717776651s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.713916448 +0000 UTC m=+146.351873794" watchObservedRunningTime="2026-01-30 16:24:51.717776651 +0000 UTC m=+146.355733997" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.724832 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fgcq7" podStartSLOduration=125.724807998 podStartE2EDuration="2m5.724807998s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.644492092 +0000 UTC m=+146.282449438" watchObservedRunningTime="2026-01-30 16:24:51.724807998 +0000 UTC m=+146.362765364" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.726682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"2139692494ba33eef2db868c7d67b746eb934636e0e538b12adf597842124180"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.745425 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" 
event={"ID":"7082a4c2-c998-4e1c-8264-2bafcd96d0c1","Type":"ContainerStarted","Data":"6a3a2bf293b49d0429263f964f58090e6b3564f1ffd0c8c8241cc42e8a8bb9c1"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.760254 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wcmvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.760641 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.761674 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-92gpq" event={"ID":"ae6eef10-afa3-4bb1-b57a-5a89d305467e","Type":"ContainerStarted","Data":"e76a8866ea3ade697977a6e4499ca9be59e4b5cd0e3c08aa551cab86750a1d91"} Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.763690 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.764997 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.264976867 +0000 UTC m=+146.902934213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.779074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5j6bc" podStartSLOduration=125.779045271 podStartE2EDuration="2m5.779045271s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.777249093 +0000 UTC m=+146.415206439" watchObservedRunningTime="2026-01-30 16:24:51.779045271 +0000 UTC m=+146.417002627"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.780808 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"177cc426e1ddcfe3423fb41da4b3eb7eb60b8c287e3562a5628e8b080ee78199"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.814911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" event={"ID":"3dc11d4d-16d8-43a2-9648-e0b833e8824a","Type":"ContainerStarted","Data":"96a2531ba35b8676aa0de4f1f2099f9a58a9bea620128dc11f663e5b4f181069"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.844553 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" event={"ID":"cb029d61-d79f-45a8-88f1-2c190d9315eb","Type":"ContainerStarted","Data":"d38548fd3dcb73080cdffd7120a608b12cb96d15aecfe9114f6b60664b38a178"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.845806 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.857490 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.857919 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.861071 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.861430 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.863053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" event={"ID":"752a21cf-698e-45b3-91e2-c00b0e82d991","Type":"ContainerStarted","Data":"752368c44f9235ef926b7526e56ccb67ecea79042bf52005a099da0ece3d6549"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.865612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.880570 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.38052259 +0000 UTC m=+147.018479936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.881089 4766 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zps75 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.881643 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" podUID="71148f4c-0b84-45c4-911c-0ec4b06cf710" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.885459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" event={"ID":"e2e4b551-3838-4db9-8ee2-363473a40bc4","Type":"ContainerStarted","Data":"833885bbce589d83ccfd18ff99e12f4e4514dfa88e6fff66c42e00586df2a781"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.892786 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podStartSLOduration=125.892757385 podStartE2EDuration="2m5.892757385s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.836855938 +0000 UTC m=+146.474813284" watchObservedRunningTime="2026-01-30 16:24:51.892757385 +0000 UTC m=+146.530714731"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.892954 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-92gpq" podStartSLOduration=7.89294659 podStartE2EDuration="7.89294659s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.887191677 +0000 UTC m=+146.525149033" watchObservedRunningTime="2026-01-30 16:24:51.89294659 +0000 UTC m=+146.530903936"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.906065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"aa5086e7c2f8951ea0255063dcbd2e4c2bb466af0545c8d1936c4a340c56d773"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.924378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.924734 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-c75qp"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943379 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c75qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943463 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podUID="c1191290-07ee-40c4-85e8-59545986d7db" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.943895 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" event={"ID":"587fc124-b506-4535-b8d2-1d0f6c91cfb9","Type":"ContainerStarted","Data":"8d5082874d25b8386799f92133190a593df112474c2ba13a6f9daf39110867e5"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.964127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerStarted","Data":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.965443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.966470 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:51 crc kubenswrapper[4766]: E0130 16:24:51.966909 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.466869337 +0000 UTC m=+147.104826683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.980168 4766 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sbckt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body=
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.980275 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused"
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.984332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" event={"ID":"17af2b06-620b-4126-ac9e-f0de24c9f6bb","Type":"ContainerStarted","Data":"37cfa1f14a9f6134b6908adadd3a6b6032df5da50b15b1baae295d503e0c6c49"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.984383 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" event={"ID":"17af2b06-620b-4126-ac9e-f0de24c9f6bb","Type":"ContainerStarted","Data":"a44c5d4dbc8a9114803afe2accd4cdb11467fb50aa1b851d129012c5a2fd66dc"}
Jan 30 16:24:51 crc kubenswrapper[4766]: I0130 16:24:51.990382 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vzmxm" podStartSLOduration=125.990357572 podStartE2EDuration="2m5.990357572s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.989849258 +0000 UTC m=+146.627806604" watchObservedRunningTime="2026-01-30 16:24:51.990357572 +0000 UTC m=+146.628314918"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.000829 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vpgtw" podStartSLOduration=126.000779719 podStartE2EDuration="2m6.000779719s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:51.928084825 +0000 UTC m=+146.566042171" watchObservedRunningTime="2026-01-30 16:24:52.000779719 +0000 UTC m=+146.638737065"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.001687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"714bf4a6a15b63c5073fc82efa378c75cb075d7b780c72784409cbfee15e41e6"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.001770 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"c86dc66ce421f80b9b44b1f2caa54a4f0c98553aefc74a62e2b7a17a2b335a61"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.002421 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.012615 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"c2ba8cdc73f709c9b246e9f10819363fed1d633aa5b27834559c875fe325adad"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.019231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" event={"ID":"d6fc09a4-19be-4bdb-b87a-5eafbfc9981c","Type":"ContainerStarted","Data":"265b6cf499f4675014bad1f21fd5af01055766ea421abe868847ffcb21f2197d"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.023631 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerStarted","Data":"49a469bfbf32d87fdc9772eb7cb8b7a2cfda12f2178ff6d5d4530255ca2db5f7"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.029865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" event={"ID":"bb325f25-00bb-4519-99d5-94ea7bbcd9d5","Type":"ContainerStarted","Data":"b1a4490160d7f5a4f6fd598ea933ace3e42e3c496fd21c4ed95898afd6564752"}
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.083948 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.088540 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.588515863 +0000 UTC m=+147.226473399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.107655 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podStartSLOduration=126.107625151 podStartE2EDuration="2m6.107625151s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.057145708 +0000 UTC m=+146.695103054" watchObservedRunningTime="2026-01-30 16:24:52.107625151 +0000 UTC m=+146.745582507"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.108908 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8lt5" podStartSLOduration=126.108899675 podStartE2EDuration="2m6.108899675s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.107254142 +0000 UTC m=+146.745211498" watchObservedRunningTime="2026-01-30 16:24:52.108899675 +0000 UTC m=+146.746857021"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.189960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.190421 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.690384803 +0000 UTC m=+147.328342149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.190777 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.191281 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.691272206 +0000 UTC m=+147.329229552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.208057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podStartSLOduration=126.208040493 podStartE2EDuration="2m6.208040493s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.174527511 +0000 UTC m=+146.812484867" watchObservedRunningTime="2026-01-30 16:24:52.208040493 +0000 UTC m=+146.845997839"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.209052 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2h92f" podStartSLOduration=127.209047309 podStartE2EDuration="2m7.209047309s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.206860021 +0000 UTC m=+146.844817367" watchObservedRunningTime="2026-01-30 16:24:52.209047309 +0000 UTC m=+146.847004655"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.238249 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" podStartSLOduration=126.238223286 podStartE2EDuration="2m6.238223286s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.23653822 +0000 UTC m=+146.874495566" watchObservedRunningTime="2026-01-30 16:24:52.238223286 +0000 UTC m=+146.876180632"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.273344 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" podStartSLOduration=126.273323179 podStartE2EDuration="2m6.273323179s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.271526432 +0000 UTC m=+146.909483788" watchObservedRunningTime="2026-01-30 16:24:52.273323179 +0000 UTC m=+146.911280525"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.294497 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.295501 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.795468278 +0000 UTC m=+147.433425624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.301569 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" podStartSLOduration=126.30155123 podStartE2EDuration="2m6.30155123s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.300507402 +0000 UTC m=+146.938464738" watchObservedRunningTime="2026-01-30 16:24:52.30155123 +0000 UTC m=+146.939508576"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.339564 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podStartSLOduration=126.339544151 podStartE2EDuration="2m6.339544151s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.337439505 +0000 UTC m=+146.975396851" watchObservedRunningTime="2026-01-30 16:24:52.339544151 +0000 UTC m=+146.977501507"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.396598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.397071 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.89704943 +0000 UTC m=+147.535006776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.399000 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qdlmd" podStartSLOduration=126.398973181 podStartE2EDuration="2m6.398973181s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.3970375 +0000 UTC m=+147.034994856" watchObservedRunningTime="2026-01-30 16:24:52.398973181 +0000 UTC m=+147.036930527"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.441779 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r7tdx" podStartSLOduration=126.441761549 podStartE2EDuration="2m6.441761549s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.440324572 +0000 UTC m=+147.078281918" watchObservedRunningTime="2026-01-30 16:24:52.441761549 +0000 UTC m=+147.079718895"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.462620 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:52 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:52 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:52 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.463002 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.474487 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8fgxh" podStartSLOduration=127.47446427 podStartE2EDuration="2m7.47446427s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:52.472561049 +0000 UTC m=+147.110518405" watchObservedRunningTime="2026-01-30 16:24:52.47446427 +0000 UTC m=+147.112421626"
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.498606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.499026 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:52.998992942 +0000 UTC m=+147.636950288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.600274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.600723 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.100707708 +0000 UTC m=+147.738665054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.701852 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.702075 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.202034403 +0000 UTC m=+147.839991759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.702190 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.702698 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.20268199 +0000 UTC m=+147.840639516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.803826 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.804061 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.304024517 +0000 UTC m=+147.941981863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.804620 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.805049 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.305041373 +0000 UTC m=+147.942998719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.906277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.907392 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.407347625 +0000 UTC m=+148.045304971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.909067 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:52 crc kubenswrapper[4766]: E0130 16:24:52.909964 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.409738328 +0000 UTC m=+148.047695674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.965230 4766 csr.go:261] certificate signing request csr-nw8v6 is approved, waiting to be issued
Jan 30 16:24:52 crc kubenswrapper[4766]: I0130 16:24:52.976385 4766 csr.go:257] certificate signing request csr-nw8v6 is issued
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.010783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.011551 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.511511666 +0000 UTC m=+148.149469072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.035158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"cfa6e2db336fc6785b9b181a509714caa7a29db322047e22a01d115b81c8c5a7"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.035555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" event={"ID":"928166c7-a17c-4693-9ae5-1c8aa4050176","Type":"ContainerStarted","Data":"350d5ffed7059997ad2a9f5fddcc10d2543a396b97f50913871049601f3e9f60"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.038243 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" event={"ID":"e2e4b551-3838-4db9-8ee2-363473a40bc4","Type":"ContainerStarted","Data":"88da5e091813b2ae889a5abce8a5c7b378f6d55226ab55628cb5c054037bd528"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.040601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"ea0a8858d87a0af0254064800731e7b85e5dd3f77c82c9e17c1814222ab6f4f3"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.040882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lnxcr" event={"ID":"6289d893-d357-4aab-a2e9-389a422ebaa5","Type":"ContainerStarted","Data":"6dba05c6cf302151559a448b5a7144550979b9dc2b3cfd9f9bcc6c2eddc24f47"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.041537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lnxcr"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.042771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" event={"ID":"9b23bdbc-d2d1-4404-8455-4e877764c72d","Type":"ContainerStarted","Data":"d543a873765dd07695f0a5b7704044c70c0fc8424ab6e91c411205852b97f8c7"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.043253 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.044852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" event={"ID":"236f27f9-0389-4143-8014-18eb1f125468","Type":"ContainerStarted","Data":"26780db5818c6efb42b27114dddc4051db1e2aa057ae3cedc31ae8acdedbb769"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046090 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046348 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hjlfz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.046539 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podUID="9b23bdbc-d2d1-4404-8455-4e877764c72d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.047001 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hqpk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.047118 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podUID="236f27f9-0389-4143-8014-18eb1f125468" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.048747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kbt4b" event={"ID":"9f9669ae-a5fc-4e59-b2b7-3ae1ebf6f3ad","Type":"ContainerStarted","Data":"97c6b040fd0559ab1bb40db0ab74cbfba27cdb7e1fb086235d129cea7d0f3c53"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.050972 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"4e5f9778b56d5a6e0d4e84609cfb82ec6d5c1ba07cd9e9a5565f9deae58dae67"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.051098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" event={"ID":"06c79303-4409-4d40-8b87-66904d05a635","Type":"ContainerStarted","Data":"e3c10f7bd38cf6c82e0a17a24ecc1f02c302aac2c3f80877ec0440f531e63771"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.054094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" event={"ID":"0fd41a92-ef77-4a02-bd2b-089d2edb3cf4","Type":"ContainerStarted","Data":"5d710bfa89da9c2138c6091b48fa73bf9c82d796128313ddb96a4381746d4576"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.054273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.055624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerStarted","Data":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.057146 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" event={"ID":"cb029d61-d79f-45a8-88f1-2c190d9315eb","Type":"ContainerStarted","Data":"ae62df9a2129ddcb0f7307054f73875d70d72f12be1382dfc87b4aa071371d4d"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.058657 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.058834 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.060742 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" event={"ID":"454fa304-47eb-48d6-9fec-406888874f6f","Type":"ContainerStarted","Data":"89c6881619dca0f47829132ee99216bb505bf40322cc160d3d5a94cf0714e639"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.062955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hfk7g" event={"ID":"bcea73c8-ba18-43d4-a84b-f4ac5f7c43b0","Type":"ContainerStarted","Data":"25cee3581538a46e288fe32e9a96d46c62607cc8ab2c44d06f6049b561af07d8"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.065506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-28vp9" event={"ID":"bb325f25-00bb-4519-99d5-94ea7bbcd9d5","Type":"ContainerStarted","Data":"8b6ceb44e605e8d65c22ee47f1d8a63f9e04beef4021b510f174928a0704cb71"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.074896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" event={"ID":"c1191290-07ee-40c4-85e8-59545986d7db","Type":"ContainerStarted","Data":"4191ecbd08a95f39ecad007556146b8b4179e9d2053e3e331735cfb272c9d87a"}
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.078312 4766 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wcmvb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.078403 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.086857 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vz9mh" podStartSLOduration=127.08683558 podStartE2EDuration="2m7.08683558s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.085566265 +0000 UTC m=+147.723523621" watchObservedRunningTime="2026-01-30 16:24:53.08683558 +0000 UTC m=+147.724792926"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.112812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.117808 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.118291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.118756 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.119050 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.137677 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.140428 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.640385974 +0000 UTC m=+148.278343340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.144408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.150089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.151557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.158921 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.221923 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.222542 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.722511689 +0000 UTC m=+148.360469045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.256753 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.261534 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podStartSLOduration=127.261509866 podStartE2EDuration="2m7.261509866s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.167163326 +0000 UTC m=+147.805120672" watchObservedRunningTime="2026-01-30 16:24:53.261509866 +0000 UTC m=+147.899467212"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.279051 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.324280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.325021 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.825008955 +0000 UTC m=+148.462966301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.355614 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n5kg4" podStartSLOduration=127.355583339 podStartE2EDuration="2m7.355583339s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.264367922 +0000 UTC m=+147.902325268" watchObservedRunningTime="2026-01-30 16:24:53.355583339 +0000 UTC m=+147.993540685"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.358537 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podStartSLOduration=127.358520237 podStartE2EDuration="2m7.358520237s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.331548579 +0000 UTC m=+147.969505935" watchObservedRunningTime="2026-01-30 16:24:53.358520237 +0000 UTC m=+147.996477583"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.396103 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" podStartSLOduration=128.396081956 podStartE2EDuration="2m8.396081956s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.392949313 +0000 UTC m=+148.030906679" watchObservedRunningTime="2026-01-30 16:24:53.396081956 +0000 UTC m=+148.034039302"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.416743 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-92b8r" podStartSLOduration=127.416720414 podStartE2EDuration="2m7.416720414s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.415712528 +0000 UTC m=+148.053669884" watchObservedRunningTime="2026-01-30 16:24:53.416720414 +0000 UTC m=+148.054677760"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.442264 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.442718 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:53.942691016 +0000 UTC m=+148.580648362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.462161 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:53 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:53 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:53 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.467407 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.526083 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hfk7g" podStartSLOduration=9.526055613 podStartE2EDuration="9.526055613s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.485588607 +0000 UTC m=+148.123545953" watchObservedRunningTime="2026-01-30 16:24:53.526055613 +0000 UTC m=+148.164012969"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.526723 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lnxcr" podStartSLOduration=9.526715601 podStartE2EDuration="9.526715601s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:53.518988265 +0000 UTC m=+148.156945611" watchObservedRunningTime="2026-01-30 16:24:53.526715601 +0000 UTC m=+148.164672957"
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.545306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.545806 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.045784458 +0000 UTC m=+148.683741804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.646532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.646953 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.146930369 +0000 UTC m=+148.784887705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.748156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.748939 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.248922322 +0000 UTC m=+148.886879668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.850170 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.850706 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.350644137 +0000 UTC m=+148.988601483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.873865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt"
Jan 30 16:24:53 crc kubenswrapper[4766]: E0130 16:24:53.963857 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.463834239 +0000 UTC m=+149.101791585 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.953252 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.979629 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 16:19:52 +0000 UTC, rotation deadline is 2026-11-08 09:41:44.580408886 +0000 UTC Jan 30 16:24:53 crc kubenswrapper[4766]: I0130 16:24:53.979681 4766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6761h16m50.600731636s for next certificate rotation Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.071014 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.071454 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.571429361 +0000 UTC m=+149.209386707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.099593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"b5ad1c002728a64dacfb9c106729503b83c822bc791e12402de5e6f16e1b6f3b"} Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104534 4766 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hqpk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104579 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" podUID="236f27f9-0389-4143-8014-18eb1f125468" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104731 4766 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gtc8b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104853 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b" podUID="cb029d61-d79f-45a8-88f1-2c190d9315eb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104541 4766 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hjlfz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.104930 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz" podUID="9b23bdbc-d2d1-4404-8455-4e877764c72d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.111232 4766 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7j765 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.111260 4766 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" podUID="0fd41a92-ef77-4a02-bd2b-089d2edb3cf4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.178291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.186013 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.685987708 +0000 UTC m=+149.323945044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.290609 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.291158 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.791131504 +0000 UTC m=+149.429088850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.392335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.392774 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:54.892756469 +0000 UTC m=+149.530713815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.454433 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:54 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:54 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:54 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.454524 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.493780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.494078 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.994033212 +0000 UTC m=+149.631990568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.494161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.494532 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:54.994513855 +0000 UTC m=+149.632471391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.595769 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.596209 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.096164549 +0000 UTC m=+149.734121895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.697873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.698350 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.198330547 +0000 UTC m=+149.836287893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.799107 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.799370 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.299338474 +0000 UTC m=+149.937295820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.799433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.799823 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.299810776 +0000 UTC m=+149.937768122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.901425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.901707 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.401670606 +0000 UTC m=+150.039627942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:54 crc kubenswrapper[4766]: I0130 16:24:54.902037 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:54 crc kubenswrapper[4766]: E0130 16:24:54.902551 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.402531709 +0000 UTC m=+150.040489055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.004072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.004314 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.504272565 +0000 UTC m=+150.142229911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.004393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.004836 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.50482352 +0000 UTC m=+150.142780906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.105231 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.105581 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.605560869 +0000 UTC m=+150.243518215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.108584 4766 generic.go:334] "Generic (PLEG): container finished" podID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerID="b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe" exitCode=0 Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.108681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerDied","Data":"b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.110790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"194256d66db56e5b4d3e2d08ae707c15cdf6e315a894a7ee01f7b04d4521ef91"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.110838 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a7ec67b2024a86549ccbc71deac794f16e478880d65c368f45714b607f3b83dc"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.111091 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.113063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"96265d74d9d20fbed617e2ab638aac389508a19e5b03e0571ee0116167a70b6e"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 
16:24:55.113138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ce2eaadfdb49ebc39c8112e3f604adfbf265faa0fb830caf045bb1984b8f8d0"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.117218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b9a66a51d927177e552352c451fd6c3e254770cf602e2aa83fee13aebbcb9dde"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.117281 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0f13921c85e76ff9fe555cb96321307ccfa2342722c69c6900286d012f7ef9cf"} Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.207058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.217857 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.717826816 +0000 UTC m=+150.355784162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.320018 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.320277 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.820239501 +0000 UTC m=+150.458196847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.320344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.321023 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.821015781 +0000 UTC m=+150.458973127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.422275 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.422533 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.92249538 +0000 UTC m=+150.560452726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.422833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.423240 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:55.92322356 +0000 UTC m=+150.561180906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.456342 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:55 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:55 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:55 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.456432 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.523998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.524214 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.024157505 +0000 UTC m=+150.662114861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.524390 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.524734 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.02472103 +0000 UTC m=+150.662678376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.547263 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gtfgx" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.625379 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.625528 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.125506171 +0000 UTC m=+150.763463517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.625796 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.626118 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.126109677 +0000 UTC m=+150.764067023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.727327 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.727642 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.227601326 +0000 UTC m=+150.865558672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.728530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.729017 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.229003044 +0000 UTC m=+150.866960390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.747777 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.748623 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.753053 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.753246 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.766555 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.795076 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hqpk" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.829632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.829785 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.329768405 +0000 UTC m=+150.967725751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.858273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.931693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:55 crc kubenswrapper[4766]: E0130 16:24:55.936095 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.436075443 +0000 UTC m=+151.074032789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936491 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.936570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:55 crc kubenswrapper[4766]: I0130 16:24:55.975135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.000607 4766 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7j765" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.037605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.037827 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.537790258 +0000 UTC m=+151.175747604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.038126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.038618 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.538598709 +0000 UTC m=+151.176556055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.066837 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.136301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"fe5386b041a5b01f5524eff7402381a52dad56a0345e06ea8e1f78b5d2454107"} Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.138906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.139463 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.639425741 +0000 UTC m=+151.277383077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.240957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.242693 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.742671658 +0000 UTC m=+151.380629004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.343711 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.345228 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.845128044 +0000 UTC m=+151.483085390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.379687 4766 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.394511 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-969pn"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.402981 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.404049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-969pn"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.406597 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.447901 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.448366 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:56.94835073 +0000 UTC m=+151.586308076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.458535 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:56 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:56 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:56 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.458713 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.539821 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549793 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.549896 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.550107 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.050075756 +0000 UTC m=+151.688033102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.589470 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.591047 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.591112 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.593414 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" containerName="collect-profiles" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.600468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.600616 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.607408 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651142 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651482 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651524 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") pod \"08038447-8cce-4cea-9ef9-f7dbcce48697\" (UID: \"08038447-8cce-4cea-9ef9-f7dbcce48697\") " Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651834 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.651901 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.652748 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.152727326 +0000 UTC m=+151.790684672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.652851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.652905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.654493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume" (OuterVolumeSpecName: "config-volume") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.668454 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k" (OuterVolumeSpecName: "kube-api-access-2cv2k") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "kube-api-access-2cv2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.668737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08038447-8cce-4cea-9ef9-f7dbcce48697" (UID: "08038447-8cce-4cea-9ef9-f7dbcce48697"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.682423 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"certified-operators-969pn\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") " pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.738634 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754157 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754794 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754855 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754964 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08038447-8cce-4cea-9ef9-f7dbcce48697-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754977 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08038447-8cce-4cea-9ef9-f7dbcce48697-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.754988 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cv2k\" (UniqueName: \"kubernetes.io/projected/08038447-8cce-4cea-9ef9-f7dbcce48697-kube-api-access-2cv2k\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:56 
crc kubenswrapper[4766]: E0130 16:24:56.755094 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.255074439 +0000 UTC m=+151.893031785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.768400 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.782370 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.784368 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.797708 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856516 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.856768 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.857167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " 
pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.857676 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.357647387 +0000 UTC m=+151.995605154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.858358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.870473 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.884468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"community-operators-qrcth\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") " pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.885623 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zps75" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.937033 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.940136 4766 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c75qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]log ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]etcd ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/max-in-flight-filter ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 16:24:56 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 16:24:56 crc kubenswrapper[4766]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-startinformers ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 16:24:56 crc kubenswrapper[4766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 16:24:56 crc kubenswrapper[4766]: livez check failed Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.940232 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" podUID="c1191290-07ee-40c4-85e8-59545986d7db" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.957538 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.959390 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.459362063 +0000 UTC m=+152.097319409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959425 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959464 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.959650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:56 crc kubenswrapper[4766]: E0130 16:24:56.960984 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.460976466 +0000 UTC m=+152.098933812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.979705 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.981031 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:56 crc kubenswrapper[4766]: I0130 16:24:56.990430 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030321 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030410 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030321 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.030717 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061206 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.061617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: E0130 16:24:57.062242 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 16:24:57.562218209 +0000 UTC m=+152.200175575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.063277 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.063748 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.086327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"certified-operators-46g6x\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") " pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.117006 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166657 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166728 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.166759 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: E0130 16:24:57.168000 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:24:57.667982892 +0000 UTC m=+152.305940238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nn5q" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.193910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"e1c78ffb61691bebbf207d4c4d8b6641b5fbb8cea89e3c384d3e24825b02def1"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.194020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" event={"ID":"c71faa34-d1e9-4e10-911a-8cc1ccb436c0","Type":"ContainerStarted","Data":"5b6c2b4b02cb2f6a1c92bba7a6f40a0a08a9ff5822b2f8d45b5c528ab23e4fa4"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.202563 4766 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T16:24:56.379775515Z","Handler":null,"Name":""} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.203558 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerStarted","Data":"c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.208124 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209508 4766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209544 4766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv" event={"ID":"08038447-8cce-4cea-9ef9-f7dbcce48697","Type":"ContainerDied","Data":"7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5"} Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.209627 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7363cff219ed95619e92adc9fc2c142dedc5995f1960823679028cb31e508fc5" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.223703 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vljjd" podStartSLOduration=13.223679934 podStartE2EDuration="13.223679934s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:57.217955221 +0000 UTC m=+151.855912567" watchObservedRunningTime="2026-01-30 16:24:57.223679934 +0000 UTC m=+151.861637280" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.267773 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268852 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.268963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.270026 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.270139 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.284701 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.287376 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrcth"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.297685 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"community-operators-cn45b\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") " pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.331510 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.357443 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-969pn"] Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.372303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.375901 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.375947 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.433148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nn5q\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.449225 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pr8gz"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.455099 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:57 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:57 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:57 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.455186 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.505907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"]
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.535483 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8fgxh"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.535872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8fgxh"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.551141 4766 patch_prober.go:28] interesting pod/console-f9d7485db-8fgxh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.551229 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" probeResult="failure" output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.587621 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.593905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.620773 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjlfz"
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.888089 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn45b"]
Jan 30 16:24:57 crc kubenswrapper[4766]: I0130 16:24:57.916222 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gtc8b"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.062985 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223046 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"]
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223601 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d" exitCode=0
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223692 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.223777 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerStarted","Data":"5097ba380ecfee61c19e8e36f0d186a1b5b9774436685bd5dece65fcdce6e72b"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.228890 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c" exitCode=0
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.229246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.229317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerStarted","Data":"d647130a49f304f91277aec2b42b5513df4dbdb8a8c2d7524ca93ac92c844730"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231718 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970" exitCode=0
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231781 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.231809 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerStarted","Data":"89ef9d87bc4ca6e14617c5d57a66c8f3479be224d2f0014eefd70f2deeb130e1"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.232915 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.236342 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.236388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"7171a7bd52b6d6953a2848237464b826e5b11b09254d5ec8e3dc69a35f3813bf"}
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.239130 4766 generic.go:334] "Generic (PLEG): container finished" podID="ecabcf90-8bec-4268-91ea-79d333295003" containerID="c3f50aa5932a546d3c2a9d802e8a53b757d37a7fd3a543f0c4f1e28dac970b7d" exitCode=0
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.239542 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerDied","Data":"c3f50aa5932a546d3c2a9d802e8a53b757d37a7fd3a543f0c4f1e28dac970b7d"}
Jan 30 16:24:58 crc kubenswrapper[4766]: W0130 16:24:58.271540 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97631abe_0d99_4f69_b208_4da9d19a8400.slice/crio-8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421 WatchSource:0}: Error finding container 8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421: Status 404 returned error can't find the container with id 8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.367371 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.368588 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.371243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.380596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.453986 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:24:58 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld
Jan 30 16:24:58 crc kubenswrapper[4766]: [+]process-running ok
Jan 30 16:24:58 crc kubenswrapper[4766]: healthz check failed
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.454095 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495067 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495415 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.495659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.597656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.598275 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.620207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"redhat-marketplace-qct46\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") " pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.716590 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.760866 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"]
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.762541 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.778774 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"]
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.903561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:58 crc kubenswrapper[4766]: I0130 16:24:58.963197 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:24:58 crc kubenswrapper[4766]: W0130 16:24:58.978254 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f598bfe_913e_4236_b3c5_78268f38396c.slice/crio-e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef WatchSource:0}: Error finding container e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef: Status 404 returned error can't find the container with id e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef
Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.005885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.005961 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006548 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.006722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.026710 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"redhat-marketplace-mvnxb\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") " pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.097346 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerStarted","Data":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerStarted","Data":"8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.247893 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.249207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerStarted","Data":"e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.251327 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3" exitCode=0 Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.251407 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3"} Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.268708 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podStartSLOduration=133.268683323 podStartE2EDuration="2m13.268683323s" podCreationTimestamp="2026-01-30 16:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:59.268529719 +0000 UTC m=+153.906487075" watchObservedRunningTime="2026-01-30 16:24:59.268683323 +0000 UTC m=+153.906640669" Jan 30 
Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.455410 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:24:59 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:24:59 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:24:59 crc kubenswrapper[4766]: healthz check failed Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.455498 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.531854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.605556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618270 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") pod \"ecabcf90-8bec-4268-91ea-79d333295003\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618418 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") pod \"ecabcf90-8bec-4268-91ea-79d333295003\" (UID: \"ecabcf90-8bec-4268-91ea-79d333295003\") " Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.618612 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ecabcf90-8bec-4268-91ea-79d333295003" (UID: "ecabcf90-8bec-4268-91ea-79d333295003"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.625400 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ecabcf90-8bec-4268-91ea-79d333295003" (UID: "ecabcf90-8bec-4268-91ea-79d333295003"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.720349 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecabcf90-8bec-4268-91ea-79d333295003-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.720386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecabcf90-8bec-4268-91ea-79d333295003-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766116 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:24:59 crc kubenswrapper[4766]: E0130 16:24:59.766476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766500 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.766660 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecabcf90-8bec-4268-91ea-79d333295003" containerName="pruner" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.767781 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.772011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.777460 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:24:59 crc kubenswrapper[4766]: I0130 16:24:59.925436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.027456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 
16:25:00.027672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.027785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.028533 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.028546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.052213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"redhat-operators-hfpqw\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") " pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.085473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.159552 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.163091 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.180865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259121 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259717 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ecabcf90-8bec-4268-91ea-79d333295003","Type":"ContainerDied","Data":"c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c"} Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.259748 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c48420ef9a88bc82024ad36793756893ada5c464cc54bd39f64f99dae7df3f4c" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.262256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"59a8b17052ea74cbace15a032912d54f5115659fdf57ccdbf95c02e5fb2078ae"} Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.299255 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.332999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.333138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.333196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.434864 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc 
kubenswrapper[4766]: I0130 16:25:00.436069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.436389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.457738 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"redhat-operators-2gzn6\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.458066 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:00 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:00 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:00 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.458202 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.479731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.480642 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.483463 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
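
Every volume in these entries moves through the same reconciler phases: operationExecutor.VerifyControllerAttachedVolume, operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A minimal sketch, assuming the kubelet journal text is piped in on stdin (e.g. from journalctl -u kubelet), that groups the phases by volume name so a mount that never reaches SetUp stands out:

    # Group kubelet volume-reconciler phases by volume name.
    # Assumes journal text on stdin; regexes follow the log wording above.
    import re, sys
    from collections import defaultdict

    phase_re = re.compile(
        r'(VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded)'
        r' for volume \\?"(?P<vol>[^"\\]+)\\?"')

    phases = defaultdict(list)
    for line in sys.stdin:
        m = phase_re.search(line)
        if m:
            phases[m.group('vol')].append(m.group(1))

    for vol, seen in phases.items():
        print(vol, '->', ' / '.join(seen))
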
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.486704 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.488958 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.639957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.640043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750684 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.750811 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.753777 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.775963 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:00 crc kubenswrapper[4766]: I0130 16:25:00.914325 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.165330 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:25:01 crc kubenswrapper[4766]: W0130 16:25:01.200478 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode8cf9d72_ab44_4f32_a5a5_1b1542f4aa2e.slice/crio-4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a WatchSource:0}: Error finding container 4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a: Status 404 returned error can't find the container with id 4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.291716 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20" exitCode=0 Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.291833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.297556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"56a4698fa29d8b3f31ac2d170f28bf29651c60264c984a5bcb461ab8477202c2"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.299694 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerID="bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc" exitCode=0 Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.299774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.300923 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerStarted","Data":"4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.304841 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"43c7dbc686fbe2f3266fcd7cd477508571fef6f7a4153b299b92d554a111a343"} Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.454709 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:01 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:01 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:01 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.455106 4766 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.926961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:25:01 crc kubenswrapper[4766]: I0130 16:25:01.932388 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-c75qp" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.331934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerStarted","Data":"900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.338920 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" exitCode=0 Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.339577 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.348561 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331" exitCode=0 Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.353136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331"} Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.374401 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.374366198 podStartE2EDuration="2.374366198s" podCreationTimestamp="2026-01-30 16:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:02.352999379 +0000 UTC m=+156.990956735" watchObservedRunningTime="2026-01-30 16:25:02.374366198 +0000 UTC m=+157.012323544" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.454596 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:02 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:02 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:02 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.454674 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:02 crc kubenswrapper[4766]: I0130 16:25:02.650842 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lnxcr" Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.396622 4766 generic.go:334] "Generic (PLEG): container finished" podID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerID="900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6" exitCode=0 Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.396682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerDied","Data":"900b817c578487a6545b763558d45ccc041153cc93ff17f4ddd144434df2b4e6"} Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.453054 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:03 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:03 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:03 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:03 crc kubenswrapper[4766]: I0130 16:25:03.453130 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.455908 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:04 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:04 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:04 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.456596 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.846495 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.939876 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") pod \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.939955 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") pod \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\" (UID: \"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e\") " Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.940341 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" (UID: "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:25:04 crc kubenswrapper[4766]: I0130 16:25:04.947266 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" (UID: "e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.041738 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.041778 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.453875 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:05 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:05 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:05 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.453949 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e","Type":"ContainerDied","Data":"4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a"} Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473103 4766 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4cc95cd04ca115618037bfe2670dceebf65cf3d35cfcfc0e26487f0f44539d4a" Jan 30 16:25:05 crc kubenswrapper[4766]: I0130 16:25:05.473169 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:25:06 crc kubenswrapper[4766]: I0130 16:25:06.452779 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:06 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:06 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:06 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:06 crc kubenswrapper[4766]: I0130 16:25:06.452877 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.016387 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.016799 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.017305 4766 patch_prober.go:28] interesting pod/downloads-7954f5f757-254pk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.017330 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-254pk" podUID="d9f3a679-bd83-4e31-aad4-0bd228e96c33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.458089 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:07 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:07 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:07 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.458202 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.499449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.526360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de5fecf1-cb2c-4ae2-a240-6f8826f6dac3-metrics-certs\") pod \"network-metrics-daemon-xrldv\" (UID: \"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3\") " pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.535060 4766 patch_prober.go:28] interesting pod/console-f9d7485db-8fgxh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.535130 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" probeResult="failure" output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 30 16:25:07 crc kubenswrapper[4766]: I0130 16:25:07.663685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xrldv" Jan 30 16:25:08 crc kubenswrapper[4766]: I0130 16:25:08.453514 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:08 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:08 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:08 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:08 crc kubenswrapper[4766]: I0130 16:25:08.454028 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.045170 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.045258 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.452694 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:09 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:09 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:09 crc kubenswrapper[4766]: 
healthz check failed Jan 30 16:25:09 crc kubenswrapper[4766]: I0130 16:25:09.452793 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:10 crc kubenswrapper[4766]: I0130 16:25:10.452684 4766 patch_prober.go:28] interesting pod/router-default-5444994796-pr8gz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:25:10 crc kubenswrapper[4766]: [-]has-synced failed: reason withheld Jan 30 16:25:10 crc kubenswrapper[4766]: [+]process-running ok Jan 30 16:25:10 crc kubenswrapper[4766]: healthz check failed Jan 30 16:25:10 crc kubenswrapper[4766]: I0130 16:25:10.452769 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pr8gz" podUID="af6eef76-87a0-459c-b2eb-61e06ae7386d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:25:11 crc kubenswrapper[4766]: I0130 16:25:11.454066 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:25:11 crc kubenswrapper[4766]: I0130 16:25:11.458905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pr8gz" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.023678 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-254pk" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.277845 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xrldv"] Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.541020 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.546739 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.596330 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" Jan 30 16:25:17 crc kubenswrapper[4766]: I0130 16:25:17.632843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"4c889cfcd8437b41d24cb60bd025045f8f105ce944bfb76b9ecf3006c68a4eb0"} Jan 30 16:25:18 crc kubenswrapper[4766]: I0130 16:25:18.642152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"02cbc3afc54a125a2b594972c317d65c837dc0bd2f808eabc243042f6575b9a7"} Jan 30 16:25:27 crc kubenswrapper[4766]: I0130 16:25:27.560023 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zqpn4" Jan 30 16:25:33 crc kubenswrapper[4766]: I0130 16:25:33.957662 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
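
The error entries that follow show the other failure mode in this window: pulls of the catalog index images from registry.redhat.io are cancelled mid-copy, kubelet logs ErrImagePull together with the full init-container spec, and the affected marketplace pods then sit in ImagePullBackOff. A minimal sketch, again assuming the kubelet journal on stdin, that pairs each failed pull with the pods it blocks:

    # Correlate failed image pulls with the pods stuck on them.
    # Assumes journal text on stdin; patterns follow the entries below.
    import re, sys

    pull_re = re.compile(r'"PullImage from image service failed".*?image="([^"]+)"')
    sync_re = re.compile(r'"Error syncing pod, skipping".*?(ErrImagePull|ImagePullBackOff).*?pod="([^"]+)"')

    for line in sys.stdin:
        if (m := pull_re.search(line)):
            print('pull failed:', m.group(1))
        elif (m := sync_re.search(line)):
            print(f'{m.group(2)}: {m.group(1)}')
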
Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.570256 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.570763 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4xn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hfpqw_openshift-marketplace(50a11a60-476d-48af-9ff9-b3d9841e6260): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:25:35 crc kubenswrapper[4766]: E0130 16:25:35.571932 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.073344 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:25:36 crc kubenswrapper[4766]: E0130 16:25:36.074719 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.074743 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.074861 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cf9d72-ab44-4f32-a5a5-1b1542f4aa2e" containerName="pruner" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.075297 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.083356 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.083645 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.087768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.133505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.133577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.235530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.258329 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:36 crc kubenswrapper[4766]: I0130 16:25:36.417649 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:39 crc kubenswrapper[4766]: I0130 16:25:39.045714 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:25:39 crc kubenswrapper[4766]: I0130 16:25:39.045805 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.156530 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.256162 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.256392 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhvw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qrcth_openshift-marketplace(ac4a36f6-21fe-4374-adaf-4505d59ce4c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:25:39 crc kubenswrapper[4766]: E0130 16:25:39.257843 4766 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.217743 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.560351 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.560537 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqrl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2gzn6_openshift-marketplace(8765357c-9e53-47c7-a913-1dc72a693ef2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:25:40 crc kubenswrapper[4766]: E0130 16:25:40.561731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.073090 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.077497 4766 util.go:30] "No sandbox 
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.082684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266173 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.266231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367146 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367261 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.367280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.389203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"installer-9-crc\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:41 crc kubenswrapper[4766]: I0130 16:25:41.401536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.431491 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.534138 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.534368 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qct46_openshift-marketplace(9f598bfe-913e-4236-b3c5-78268f38396c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535392 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535532 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp6ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cn45b_openshift-marketplace(410ce027-e739-4759-a4ca-96994b5e37e4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.535611 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.536666 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.546048 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.546220 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhlt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mvnxb_openshift-marketplace(bbcf0ab9-04e7-47e0-b375-c09a93463cc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.547453 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801585 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c"
Jan 30 16:25:42 crc kubenswrapper[4766]: E0130 16:25:42.801996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4"
Jan 30 16:25:42 crc kubenswrapper[4766]: I0130 16:25:42.915232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 16:25:42 crc kubenswrapper[4766]: W0130 16:25:42.929038 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4b30b717_ab4b_428d_8d98_f035422849b5.slice/crio-8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0 WatchSource:0}: Error finding container 8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0: Status 404 returned error can't find the container with id 8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0
Jan 30 16:25:42 crc kubenswrapper[4766]: I0130 16:25:42.996432 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 16:25:43 crc kubenswrapper[4766]: W0130 16:25:43.006743 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2e9f6906_38fe_44c5_9bfa_91a159d0bbb0.slice/crio-3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e WatchSource:0}: Error finding container 3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e: Status 404 returned error can't find the container with id 3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.808096 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1" exitCode=0
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.808202 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.810064 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2" exitCode=0
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.810124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.812369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerStarted","Data":"a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.812412 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerStarted","Data":"3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.815037 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xrldv" event={"ID":"de5fecf1-cb2c-4ae2-a240-6f8826f6dac3","Type":"ContainerStarted","Data":"933df1289a819ed8ed49055ce89187d3fa29bd9c5f85fa171641c96f6ce1f3db"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.817959 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerStarted","Data":"0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751"}
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.818015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerStarted","Data":"8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0"}
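The manager.go:1169 warnings ("Status 404 ... can't find the container") come from the kubelet's cAdvisor watcher racing CRI-O container creation: the cgroup shows up before the runtime can report the container. The ContainerStarted PLEG events that follow for the same IDs (8ea4803c... and 3da589a8...) show both containers did come up, so the warnings were transient. A quick on-node check, assuming crictl is available (ID prefixes copied from the entries above; crictl accepts prefixes):

    crictl ps -a | grep -e 8ea4803c -e 3da589a8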
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.855938 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.855910317 podStartE2EDuration="2.855910317s" podCreationTimestamp="2026-01-30 16:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.850908724 +0000 UTC m=+198.488866100" watchObservedRunningTime="2026-01-30 16:25:43.855910317 +0000 UTC m=+198.493867663"
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.867787 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.867762483 podStartE2EDuration="7.867762483s" podCreationTimestamp="2026-01-30 16:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.865612995 +0000 UTC m=+198.503570351" watchObservedRunningTime="2026-01-30 16:25:43.867762483 +0000 UTC m=+198.505719829"
Jan 30 16:25:43 crc kubenswrapper[4766]: I0130 16:25:43.904160 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xrldv" podStartSLOduration=178.90413845 podStartE2EDuration="2m58.90413845s" podCreationTimestamp="2026-01-30 16:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:43.89887706 +0000 UTC m=+198.536834406" watchObservedRunningTime="2026-01-30 16:25:43.90413845 +0000 UTC m=+198.542095796"
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.830081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerStarted","Data":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.834090 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerStarted","Data":"3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.836821 4766 generic.go:334] "Generic (PLEG): container finished" podID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerID="a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb" exitCode=0
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.837470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerDied","Data":"a8efc55b7e937307fec3de34be2e9c333069230a69b06703579516d9fd5c29bb"}
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.857143 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46g6x" podStartSLOduration=2.786555888 podStartE2EDuration="48.85710492s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.232659073 +0000 UTC m=+152.870616419" lastFinishedPulling="2026-01-30 16:25:44.303208105 +0000 UTC m=+198.941165451" observedRunningTime="2026-01-30 16:25:44.850455652 +0000 UTC m=+199.488412998" watchObservedRunningTime="2026-01-30 16:25:44.85710492 +0000 UTC m=+199.495062266"
Jan 30 16:25:44 crc kubenswrapper[4766]: I0130 16:25:44.877689 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-969pn" podStartSLOduration=2.89637484 podStartE2EDuration="48.877663237s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.234162594 +0000 UTC m=+152.872119940" lastFinishedPulling="2026-01-30 16:25:44.215450991 +0000 UTC m=+198.853408337" observedRunningTime="2026-01-30 16:25:44.872723725 +0000 UTC m=+199.510681071" watchObservedRunningTime="2026-01-30 16:25:44.877663237 +0000 UTC m=+199.515620583"
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.136722 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") pod \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") "
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") pod \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\" (UID: \"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0\") "
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.241395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" (UID: "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.242230 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.253492 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" (UID: "2e9f6906-38fe-44c5-9bfa-91a159d0bbb0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
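Worth noting how the pod_startup_latency_tracker numbers fit together: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), matching the Kubernetes pod-startup SLI, which excludes pull time. Checked against the certified-operators-46g6x entry above:

    pull window        = 16:25:44.303208105 - 16:24:58.232659073 = 46.070549032 s
    podStartSLOduration = 48.85710492 - 46.070549032 = 2.786555888 s

which is exactly the logged value. Pods that pulled nothing (firstStartedPulling="0001-01-01 ...", e.g. installer-9-crc) report equal SLO and E2E durations.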
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.343891 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e9f6906-38fe-44c5-9bfa-91a159d0bbb0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.739519 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.740534 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852108 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e9f6906-38fe-44c5-9bfa-91a159d0bbb0","Type":"ContainerDied","Data":"3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e"} Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.852824 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da589a8dcf475e9766957ef366b66cf2d2610eb529a2a5e4e10ae611412867e" Jan 30 16:25:46 crc kubenswrapper[4766]: I0130 16:25:46.878160 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.117578 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.117665 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:47 crc kubenswrapper[4766]: I0130 16:25:47.163221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.906779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.909335 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a" exitCode=0 Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.909399 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"} Jan 30 16:25:55 crc kubenswrapper[4766]: I0130 16:25:55.912226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.794679 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-969pn" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.921024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerStarted","Data":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.923134 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6" exitCode=0 Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.923210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.925475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.928063 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" exitCode=0 Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.928102 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.938083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.938125 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrcth" Jan 30 16:25:56 crc kubenswrapper[4766]: I0130 16:25:56.979791 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrcth" podStartSLOduration=2.699943034 podStartE2EDuration="1m0.979770737s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.232720615 +0000 UTC m=+152.870677961" lastFinishedPulling="2026-01-30 16:25:56.512548318 +0000 UTC m=+211.150505664" observedRunningTime="2026-01-30 16:25:56.951938816 +0000 UTC m=+211.589896162" watchObservedRunningTime="2026-01-30 16:25:56.979770737 +0000 UTC m=+211.617728083" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.156613 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46g6x" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.937352 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerStarted","Data":"be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.939234 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" 
containerID="beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab" exitCode=0 Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.939292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.941400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerStarted","Data":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.944129 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f" exitCode=0 Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.944225 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f"} Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.959995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hfpqw" podStartSLOduration=3.978307352 podStartE2EDuration="58.959974311s" podCreationTimestamp="2026-01-30 16:24:59 +0000 UTC" firstStartedPulling="2026-01-30 16:25:02.360618222 +0000 UTC m=+156.998575558" lastFinishedPulling="2026-01-30 16:25:57.342285171 +0000 UTC m=+211.980242517" observedRunningTime="2026-01-30 16:25:57.959685474 +0000 UTC m=+212.597642850" watchObservedRunningTime="2026-01-30 16:25:57.959974311 +0000 UTC m=+212.597931657" Jan 30 16:25:57 crc kubenswrapper[4766]: I0130 16:25:57.987755 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" probeResult="failure" output=< Jan 30 16:25:57 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:25:57 crc kubenswrapper[4766]: > Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.046163 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2gzn6" podStartSLOduration=3.009174795 podStartE2EDuration="58.046146094s" podCreationTimestamp="2026-01-30 16:25:00 +0000 UTC" firstStartedPulling="2026-01-30 16:25:02.341493994 +0000 UTC m=+156.979451340" lastFinishedPulling="2026-01-30 16:25:57.378465283 +0000 UTC m=+212.016422639" observedRunningTime="2026-01-30 16:25:58.043143514 +0000 UTC m=+212.681100850" watchObservedRunningTime="2026-01-30 16:25:58.046146094 +0000 UTC m=+212.684103440" Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.951841 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerStarted","Data":"4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.955263 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" 
event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.960124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerStarted","Data":"815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb"} Jan 30 16:25:58 crc kubenswrapper[4766]: I0130 16:25:58.978425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qct46" podStartSLOduration=3.9564252399999997 podStartE2EDuration="1m0.978395153s" podCreationTimestamp="2026-01-30 16:24:58 +0000 UTC" firstStartedPulling="2026-01-30 16:25:01.296496085 +0000 UTC m=+155.934453431" lastFinishedPulling="2026-01-30 16:25:58.318465998 +0000 UTC m=+212.956423344" observedRunningTime="2026-01-30 16:25:58.973901113 +0000 UTC m=+213.611858469" watchObservedRunningTime="2026-01-30 16:25:58.978395153 +0000 UTC m=+213.616352499" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.020480 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvnxb" podStartSLOduration=3.957040588 podStartE2EDuration="1m1.020464862s" podCreationTimestamp="2026-01-30 16:24:58 +0000 UTC" firstStartedPulling="2026-01-30 16:25:01.303483222 +0000 UTC m=+155.941440568" lastFinishedPulling="2026-01-30 16:25:58.366907496 +0000 UTC m=+213.004864842" observedRunningTime="2026-01-30 16:25:59.02039133 +0000 UTC m=+213.658348686" watchObservedRunningTime="2026-01-30 16:25:59.020464862 +0000 UTC m=+213.658422208" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.098496 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.098583 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.967123 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd" exitCode=0 Jan 30 16:25:59 crc kubenswrapper[4766]: I0130 16:25:59.967209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd"} Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.085677 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.085770 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hfpqw" Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.156721 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" probeResult="failure" output=< Jan 30 16:26:00 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:26:00 crc kubenswrapper[4766]: > Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.477988 
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.478272 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-46g6x" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" containerID="cri-o://4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" gracePeriod=2
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.486835 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2gzn6"
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.486884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2gzn6"
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.853793 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975688 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393" exitCode=0
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"}
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975794 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g6x"
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g6x" event={"ID":"7c0324d7-1f61-4e1a-9ce7-fd960abfe244","Type":"ContainerDied","Data":"d647130a49f304f91277aec2b42b5513df4dbdb8a8c2d7524ca93ac92c844730"}
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.975841 4766 scope.go:117] "RemoveContainer" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.982802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerStarted","Data":"52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058"}
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.997753 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") "
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.997861 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") "
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.998069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") pod \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\" (UID: \"7c0324d7-1f61-4e1a-9ce7-fd960abfe244\") "
Jan 30 16:26:00 crc kubenswrapper[4766]: I0130 16:26:00.998737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities" (OuterVolumeSpecName: "utilities") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.005025 4766 scope.go:117] "RemoveContainer" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.009304 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn45b" podStartSLOduration=2.877244982 podStartE2EDuration="1m5.009276937s" podCreationTimestamp="2026-01-30 16:24:56 +0000 UTC" firstStartedPulling="2026-01-30 16:24:58.238533651 +0000 UTC m=+152.876490997" lastFinishedPulling="2026-01-30 16:26:00.370565606 +0000 UTC m=+215.008522952" observedRunningTime="2026-01-30 16:26:01.001703155 +0000 UTC m=+215.639660521" watchObservedRunningTime="2026-01-30 16:26:01.009276937 +0000 UTC m=+215.647234283"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.036903 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws" (OuterVolumeSpecName: "kube-api-access-wphws") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "kube-api-access-wphws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.049844 4766 scope.go:117] "RemoveContainer" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.065670 4766 scope.go:117] "RemoveContainer" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"
Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.067268 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": container with ID starting with 4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393 not found: ID does not exist" containerID="4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.067323 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393"} err="failed to get container status \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": rpc error: code = NotFound desc = could not find container \"4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393\": container with ID starting with 4cffc4b823b25da9acdf8fd2bcb8a9c77af6d9cb3020272a0800c152eff01393 not found: ID does not exist"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.067375 4766 scope.go:117] "RemoveContainer" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"
Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.069373 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": container with ID starting with 97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1 not found: ID does not exist" containerID="97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.069411 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1"} err="failed to get container status \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": rpc error: code = NotFound desc = could not find container \"97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1\": container with ID starting with 97ccab51b00c2ebd3e485d91597544051c2bed561ec9c240ed8037412a2d52e1 not found: ID does not exist"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.069438 4766 scope.go:117] "RemoveContainer" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"
Jan 30 16:26:01 crc kubenswrapper[4766]: E0130 16:26:01.071319 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": container with ID starting with af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c not found: ID does not exist" containerID="af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.071363 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c"} err="failed to get container status \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": rpc error: code = NotFound desc = could not find container \"af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c\": container with ID starting with af5b6a23fadc196544b29ed44557ed01e0f50ea2f1dc14b5996b09933749bb3c not found: ID does not exist"
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.081142 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0324d7-1f61-4e1a-9ce7-fd960abfe244" (UID: "7c0324d7-1f61-4e1a-9ce7-fd960abfe244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100157 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphws\" (UniqueName: \"kubernetes.io/projected/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-kube-api-access-wphws\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100257 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.100271 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0324d7-1f61-4e1a-9ce7-fd960abfe244-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.146350 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:26:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:26:01 crc kubenswrapper[4766]: >
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.302568 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"]
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.306598 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-46g6x"]
Jan 30 16:26:01 crc kubenswrapper[4766]: I0130 16:26:01.538804 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:26:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:26:01 crc kubenswrapper[4766]: >
Jan 30 16:26:02 crc kubenswrapper[4766]: I0130 16:26:02.047315 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" path="/var/lib/kubelet/pods/7c0324d7-1f61-4e1a-9ce7-fd960abfe244/volumes"
Jan 30 16:26:06 crc kubenswrapper[4766]: I0130 16:26:06.271939 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sbckt"]
Jan 30 16:26:06 crc kubenswrapper[4766]: I0130 16:26:06.982356 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.029765 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.332608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.332938 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:26:07 crc kubenswrapper[4766]: I0130 16:26:07.372699 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.272228 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.717476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.718798 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:26:08 crc kubenswrapper[4766]: I0130 16:26:08.760302 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047079 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047169 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047229 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047801 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.047876 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823" gracePeriod=600
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.075490 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.140318 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:26:09 crc kubenswrapper[4766]: I0130 16:26:09.182913 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.083562 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn45b"]
Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.142210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hfpqw"
Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.187695 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hfpqw"
Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.539974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2gzn6"
Jan 30 16:26:10 crc kubenswrapper[4766]: I0130 16:26:10.593998 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2gzn6"
Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.040398 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823" exitCode=0
Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.040434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"}
Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.041219 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn45b" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" containerID="cri-o://52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" gracePeriod=2
Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.081068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"]
Jan 30 16:26:11 crc kubenswrapper[4766]: I0130 16:26:11.081527 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvnxb" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" containerID="cri-o://815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" gracePeriod=2
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.049846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058"}
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.049792 4766 generic.go:334] "Generic (PLEG): container finished" podID="410ce027-e739-4759-a4ca-96994b5e37e4" containerID="52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" exitCode=0
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.052962 4766 generic.go:334] "Generic (PLEG): container finished" podID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerID="815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" exitCode=0
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.053008 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb"}
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.640152 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b"
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.757882 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.757991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.758083 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") pod \"410ce027-e739-4759-a4ca-96994b5e37e4\" (UID: \"410ce027-e739-4759-a4ca-96994b5e37e4\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.759144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities" (OuterVolumeSpecName: "utilities") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.769463 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht" (OuterVolumeSpecName: "kube-api-access-kp6ht") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "kube-api-access-kp6ht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.805902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb"
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.810507 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "410ce027-e739-4759-a4ca-96994b5e37e4" (UID: "410ce027-e739-4759-a4ca-96994b5e37e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860386 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860447 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp6ht\" (UniqueName: \"kubernetes.io/projected/410ce027-e739-4759-a4ca-96994b5e37e4-kube-api-access-kp6ht\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.860466 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/410ce027-e739-4759-a4ca-96994b5e37e4-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961462 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961549 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.961580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") pod \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\" (UID: \"bbcf0ab9-04e7-47e0-b375-c09a93463cc9\") "
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.962618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities" (OuterVolumeSpecName: "utilities") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.968497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7" (OuterVolumeSpecName: "kube-api-access-nhlt7") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "kube-api-access-nhlt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:26:12 crc kubenswrapper[4766]: I0130 16:26:12.984557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbcf0ab9-04e7-47e0-b375-c09a93463cc9" (UID: "bbcf0ab9-04e7-47e0-b375-c09a93463cc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.062849 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhlt7\" (UniqueName: \"kubernetes.io/projected/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-kube-api-access-nhlt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.064649 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.064764 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcf0ab9-04e7-47e0-b375-c09a93463cc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.063881 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn45b" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.063890 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn45b" event={"ID":"410ce027-e739-4759-a4ca-96994b5e37e4","Type":"ContainerDied","Data":"7171a7bd52b6d6953a2848237464b826e5b11b09254d5ec8e3dc69a35f3813bf"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.065041 4766 scope.go:117] "RemoveContainer" containerID="52d5ec6b6ab8d2bdb3b41676fffe38c24e44cd569cf21408fac15619934e2058" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.066903 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvnxb" event={"ID":"bbcf0ab9-04e7-47e0-b375-c09a93463cc9","Type":"ContainerDied","Data":"59a8b17052ea74cbace15a032912d54f5115659fdf57ccdbf95c02e5fb2078ae"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.066979 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvnxb" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.070958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"} Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.090324 4766 scope.go:117] "RemoveContainer" containerID="92e62e71c7fbd33706b95a91aa2eaefded0e0c9e9acefeb1a81f0225cc9e60dd" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.116365 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.119414 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn45b"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.119591 4766 scope.go:117] "RemoveContainer" containerID="7e4b12fb0e25bcc11137fa0eb3d6857be3b4209f7f96e6448f5d10662b96aeb3" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.137627 4766 scope.go:117] "RemoveContainer" containerID="815a646ec94b2437921dbafceb1d7e98aeb0ed8c4ac31b3fa67c0ac231c901cb" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.137995 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.143723 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvnxb"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.151245 4766 scope.go:117] "RemoveContainer" containerID="beca937a48be7e110f42c991300022e0146b8a35b30f49ebf2865758e9ae66ab" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.167435 4766 scope.go:117] "RemoveContainer" containerID="bd3ffb662254257b5ee19625a20b3eb5adc1c1ea60a29b9946405918cddc84cc" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.480952 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.481295 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2gzn6" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" containerID="cri-o://0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" gracePeriod=2 Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.831893 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987158 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.987249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") pod \"8765357c-9e53-47c7-a913-1dc72a693ef2\" (UID: \"8765357c-9e53-47c7-a913-1dc72a693ef2\") " Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.988540 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities" (OuterVolumeSpecName: "utilities") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:13 crc kubenswrapper[4766]: I0130 16:26:13.996756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6" (OuterVolumeSpecName: "kube-api-access-sqrl6") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "kube-api-access-sqrl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.049152 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" path="/var/lib/kubelet/pods/410ce027-e739-4759-a4ca-96994b5e37e4/volumes" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.050317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" path="/var/lib/kubelet/pods/bbcf0ab9-04e7-47e0-b375-c09a93463cc9/volumes" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078604 4766 generic.go:334] "Generic (PLEG): container finished" podID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" exitCode=0 Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078712 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gzn6" event={"ID":"8765357c-9e53-47c7-a913-1dc72a693ef2","Type":"ContainerDied","Data":"43c7dbc686fbe2f3266fcd7cd477508571fef6f7a4153b299b92d554a111a343"} Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.078735 4766 scope.go:117] "RemoveContainer" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.080167 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2gzn6" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.088961 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqrl6\" (UniqueName: \"kubernetes.io/projected/8765357c-9e53-47c7-a913-1dc72a693ef2-kube-api-access-sqrl6\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.089005 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.094059 4766 scope.go:117] "RemoveContainer" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.110115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8765357c-9e53-47c7-a913-1dc72a693ef2" (UID: "8765357c-9e53-47c7-a913-1dc72a693ef2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.112284 4766 scope.go:117] "RemoveContainer" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128278 4766 scope.go:117] "RemoveContainer" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.128803 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": container with ID starting with 0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614 not found: ID does not exist" containerID="0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128860 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614"} err="failed to get container status \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": rpc error: code = NotFound desc = could not find container \"0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614\": container with ID starting with 0c0b309fb7fdefacf707e8856a871dd6c478a342c7967d85b53333e513264614 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.128884 4766 scope.go:117] "RemoveContainer" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.129259 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": container with ID starting with 07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3 not found: ID does not exist" containerID="07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.129325 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3"} err="failed to get container status \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": rpc error: code = NotFound desc = could not find container \"07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3\": container with ID starting with 07632b3f53a3263a57b9d0d5ae423a7490fcc02d1df21bcac6776d409abc23d3 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.129376 4766 scope.go:117] "RemoveContainer" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: E0130 16:26:14.130169 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": container with ID starting with 3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613 not found: ID does not exist" containerID="3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.130342 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613"} err="failed to get container status \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": rpc error: code = NotFound desc = could not find container \"3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613\": container with ID starting with 3b05632315ff455414402ad75d1dc4b60cee035d51e0e545e949479d0cb36613 not found: ID does not exist" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.190210 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8765357c-9e53-47c7-a913-1dc72a693ef2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.417230 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:14 crc kubenswrapper[4766]: I0130 16:26:14.420986 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2gzn6"] Jan 30 16:26:16 crc kubenswrapper[4766]: I0130 16:26:16.047941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" path="/var/lib/kubelet/pods/8765357c-9e53-47c7-a913-1dc72a693ef2/volumes" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.001812 4766 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.001907 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002712 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002886 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002933 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.002963 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.003019 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" gracePeriod=15 Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004316 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004622 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004652 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004661 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004674 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004681 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004690 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004696 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004707 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004712 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004720 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004728 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004737 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004744 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004754 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004761 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004771 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004778 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004788 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004794 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004805 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004823 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004837 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004845 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004857 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004877 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004883 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004894 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004901 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004918 4766 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004929 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004936 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-utilities" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.004946 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.004953 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="extract-content" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005063 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005074 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005084 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005092 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005102 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcf0ab9-04e7-47e0-b375-c09a93463cc9" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005114 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0324d7-1f61-4e1a-9ce7-fd960abfe244" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005122 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005129 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8765357c-9e53-47c7-a913-1dc72a693ef2" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005139 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ce027-e739-4759-a4ca-96994b5e37e4" containerName="registry-server" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005146 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005154 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9f6906-38fe-44c5-9bfa-91a159d0bbb0" containerName="pruner" Jan 30 16:26:21 crc kubenswrapper[4766]: E0130 16:26:21.005285 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.005294 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.006647 4766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.007230 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.013482 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193565 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193842 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193891 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.193946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.194002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.295921 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296025 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc 
kubenswrapper[4766]: I0130 16:26:21.296093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296165 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296228 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:21 crc kubenswrapper[4766]: I0130 16:26:21.296206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.127995 4766 generic.go:334] "Generic (PLEG): container finished" podID="4b30b717-ab4b-428d-8d98-f035422849b5" containerID="0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.128095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerDied","Data":"0af9e4eb5943a3ef897af4faec4286f4a02c813f78a0ed3cf7d1ba829b602751"} Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.129101 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.131245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.133317 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134366 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134399 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134413 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" exitCode=0 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134427 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" exitCode=2 Jan 30 16:26:22 crc kubenswrapper[4766]: I0130 16:26:22.134546 4766 scope.go:117] "RemoveContainer" containerID="5f73a8da3c1b7b445fa2027fd14f91722861faa65068558ac6248fe39882a036" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.149103 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.453657 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.454765 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.455403 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.455742 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.495547 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.496250 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.496873 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630914 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630944 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630993 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.630988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631068 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") pod \"4b30b717-ab4b-428d-8d98-f035422849b5\" (UID: \"4b30b717-ab4b-428d-8d98-f035422849b5\") " Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631165 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631354 4766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631368 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631379 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631388 4766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.631416 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock" (OuterVolumeSpecName: "var-lock") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.636912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b30b717-ab4b-428d-8d98-f035422849b5" (UID: "4b30b717-ab4b-428d-8d98-f035422849b5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.732442 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b30b717-ab4b-428d-8d98-f035422849b5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:23 crc kubenswrapper[4766]: I0130 16:26:23.732509 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b30b717-ab4b-428d-8d98-f035422849b5-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.045733 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.159811 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160445 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" exitCode=0 Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160535 4766 scope.go:117] "RemoveContainer" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.160745 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.161574 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.162152 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.163641 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.163914 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4b30b717-ab4b-428d-8d98-f035422849b5","Type":"ContainerDied","Data":"8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0"} Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165790 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea4803c398e8dd2c45bccc8f3bf98bb77923f9cc01db78303b4c730dab253c0" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.165265 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.168508 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.168709 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.181242 4766 scope.go:117] "RemoveContainer" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.201896 4766 scope.go:117] "RemoveContainer" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.217370 4766 scope.go:117] "RemoveContainer" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.231363 4766 scope.go:117] "RemoveContainer" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.248032 4766 scope.go:117] "RemoveContainer" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.271858 4766 scope.go:117] "RemoveContainer" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.273620 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": container with ID starting with 0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1 not found: ID does not exist" containerID="0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.273653 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1"} err="failed to get container status \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": rpc error: code = NotFound desc = could not find container \"0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1\": container with ID starting with 0c9ccb8f810b0dc50bc3cd19bc2bd86a24032fc7b97b03e1c9b8cf73b970bcb1 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.273696 4766 scope.go:117] "RemoveContainer" containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.273982 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": container with ID starting with d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9 not found: ID does not exist" 
containerID="d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274031 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9"} err="failed to get container status \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": rpc error: code = NotFound desc = could not find container \"d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9\": container with ID starting with d5f851a814c9eabcddf056dc6bcc7bf6d10a97aa5f5553205e1aa2d0119ae7f9 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274051 4766 scope.go:117] "RemoveContainer" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.274420 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": container with ID starting with d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334 not found: ID does not exist" containerID="d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274466 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334"} err="failed to get container status \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": rpc error: code = NotFound desc = could not find container \"d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334\": container with ID starting with d98a7c574dcc5df4de2cd3992b6d5897f57c0ee5e47c11c8790013adc17c7334 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.274495 4766 scope.go:117] "RemoveContainer" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.275252 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": container with ID starting with f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc not found: ID does not exist" containerID="f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275274 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc"} err="failed to get container status \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": rpc error: code = NotFound desc = could not find container \"f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc\": container with ID starting with f3f001a40dac5ddcd236d8b2015ef45b23a90b7c0dca990970ddc307c4284acc not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275290 4766 scope.go:117] "RemoveContainer" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.275682 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": container with ID starting with 5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41 not found: ID does not exist" containerID="5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275719 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41"} err="failed to get container status \"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": rpc error: code = NotFound desc = could not find container \"5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41\": container with ID starting with 5fc7ce97db94898b1ce6f5dd86ea87456e8f1666f8c529aa07117c3d76709e41 not found: ID does not exist" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.275736 4766 scope.go:117] "RemoveContainer" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: E0130 16:26:24.276125 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": container with ID starting with a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045 not found: ID does not exist" containerID="a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045" Jan 30 16:26:24 crc kubenswrapper[4766]: I0130 16:26:24.276145 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045"} err="failed to get container status \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": rpc error: code = NotFound desc = could not find container \"a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045\": container with ID starting with a1792b8044f88a3816ab95d8d0ee3a0c287c7e065fc7ac6b0ba33b0a39596045 not found: ID does not exist" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.039603 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.040100 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.042815 4766 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.043234 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.081831 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8efab9447208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,LastTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.127079 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.128217 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.128619 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.129239 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.129976 4766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 
16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.130002 4766 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.130328 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms" Jan 30 16:26:26 crc kubenswrapper[4766]: I0130 16:26:26.183128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"49fecd8be2a6c4bf752c52a3d9142162f9f7dac36faeba708d06ab3a53e06d87"} Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.331370 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Jan 30 16:26:26 crc kubenswrapper[4766]: E0130 16:26:26.732580 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Jan 30 16:26:27 crc kubenswrapper[4766]: I0130 16:26:27.191035 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2"} Jan 30 16:26:27 crc kubenswrapper[4766]: E0130 16:26:27.192251 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:27 crc kubenswrapper[4766]: I0130 16:26:27.192282 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:27 crc kubenswrapper[4766]: E0130 16:26:27.534051 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Jan 30 16:26:28 crc kubenswrapper[4766]: E0130 16:26:28.196379 4766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:26:29 crc kubenswrapper[4766]: E0130 16:26:29.135783 4766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" 
interval="3.2s" Jan 30 16:26:29 crc kubenswrapper[4766]: E0130 16:26:29.739567 4766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8efab9447208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,LastTimestamp:2026-01-30 16:26:26.081133064 +0000 UTC m=+240.719090410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.302046 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift" containerID="cri-o://c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" gracePeriod=15 Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.659901 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.661108 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.661625 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.842907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.842991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843027 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843084 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843138 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843240 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843308 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843400 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843436 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.843550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") pod \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\" (UID: \"21a8aae5-a6f8-43e0-ab59-1e6af94eb133\") " Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844764 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.844983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.845317 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.850919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9" (OuterVolumeSpecName: "kube-api-access-mntd9") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "kube-api-access-mntd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851242 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.850965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851483 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851633 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.851892 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.852190 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.852438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.853231 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21a8aae5-a6f8-43e0-ab59-1e6af94eb133" (UID: "21a8aae5-a6f8-43e0-ab59-1e6af94eb133"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945324 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mntd9\" (UniqueName: \"kubernetes.io/projected/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-kube-api-access-mntd9\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945376 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945395 4766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945409 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945425 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945440 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945453 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945467 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945479 4766 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945496 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945509 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945523 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945536 4766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:31 crc kubenswrapper[4766]: I0130 16:26:31.945549 4766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21a8aae5-a6f8-43e0-ab59-1e6af94eb133-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217235 4766 generic.go:334] "Generic (PLEG): container finished" podID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" exitCode=0 Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217289 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerDied","Data":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"} Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217303 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" event={"ID":"21a8aae5-a6f8-43e0-ab59-1e6af94eb133","Type":"ContainerDied","Data":"a6184cf8b16957ad6df32ef60f66d31e49cd6a8b7088d60d3d7abeb822aa03d8"} Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217343 4766 scope.go:117] "RemoveContainer" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.217975 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.218171 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.221196 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.221737 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.234838 4766 scope.go:117] "RemoveContainer" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: E0130 16:26:32.235230 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": container with ID starting with c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1 not found: ID does not exist" containerID="c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1" Jan 30 16:26:32 crc kubenswrapper[4766]: I0130 16:26:32.235266 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1"} err="failed to get container status \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": rpc error: code = NotFound desc = could not find container \"c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1\": container with ID starting with c2295bbeccc7cdf5226b500992026187a8b3be7c58a18b6a59edac5d0c9bd3b1 not found: ID does not exist" Jan 30 16:26:32 crc kubenswrapper[4766]: E0130 16:26:32.336476 4766 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="6.4s" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.038572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.040332 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.040946 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.055943 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.055985 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:34 crc kubenswrapper[4766]: E0130 16:26:34.056592 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.057244 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:34 crc kubenswrapper[4766]: I0130 16:26:34.233819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05a9592f0076932bed52a64c5174d7b0290219dfa2f88db228313205be00c92e"} Jan 30 16:26:35 crc kubenswrapper[4766]: E0130 16:26:35.102881 4766 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" volumeName="registry-storage" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.241937 4766 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="eaab36321c3200e0f1d677b1d444f633b389bf5abbfdcff7bbab0ae863bc87a6" exitCode=0 Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"eaab36321c3200e0f1d677b1d444f633b389bf5abbfdcff7bbab0ae863bc87a6"} Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242307 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242335 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.242807 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: E0130 16:26:35.242931 4766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.243041 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246648 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246698 4766 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38" 
exitCode=1
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.246729 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38"}
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247208 4766 scope.go:117] "RemoveContainer" containerID="6c92ca4605476ce3aa32fea4f5c20649c09c0dff1518d3c33e32c1807c2e4d38"
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247564 4766 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused"
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.247906 4766 status_manager.go:851] "Failed to get status for pod" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" pod="openshift-authentication/oauth-openshift-558db77b4-sbckt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sbckt\": dial tcp 38.102.83.103:6443: connect: connection refused"
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.248138 4766 status_manager.go:851] "Failed to get status for pod" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused"
Jan 30 16:26:35 crc kubenswrapper[4766]: I0130 16:26:35.905458 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265505 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c8a6228825cdad02fd214b175dcfd4582cc31eb4021a6fa3da99e1e9e20dbb2"}
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265914 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbb84e0a697372a9b6c4917135f4e27f7c946c9427f13b11debe3917ddb7730a"}
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d41387ed76bbba54ab16e4a8774a0fc8ea422811b9fb4e6eb0b367421314405"}
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.265942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85f2730b6c81494e30bb54b9d6db46018c3c1b38f70ac1667a882db1b7548b47"}
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.270032 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 16:26:36 crc kubenswrapper[4766]: I0130 16:26:36.270087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66412e2a523a7faab5a9a322c702486daeda620792156ede9e963c0f09763795"}
Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281166 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1252afe6e1cf63b6d3b7b9258560b2ede202b7c18a267d16316c042d9ec9db26"}
Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281631 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281797 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:37 crc kubenswrapper[4766]: I0130 16:26:37.281749 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.057895 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.058487 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.063061 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:39 crc kubenswrapper[4766]: I0130 16:26:39.265087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:26:42 crc kubenswrapper[4766]: I0130 16:26:42.297732 4766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.313981 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.314038 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.319847 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:26:43 crc kubenswrapper[4766]: I0130 16:26:43.323464 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaea65f6-cc7c-4398-a46b-87c70da9698e"
Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.319455 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.319487 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:26:44 crc kubenswrapper[4766]: I0130 16:26:44.324006 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaea65f6-cc7c-4398-a46b-87c70da9698e"
Jan 30 16:26:45 crc kubenswrapper[4766]: I0130 16:26:45.905881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:26:45 crc kubenswrapper[4766]: I0130 16:26:45.909872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:26:46 crc kubenswrapper[4766]: I0130 16:26:46.336406 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:26:51 crc kubenswrapper[4766]: I0130 16:26:51.994274 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.177314 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.306798 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 16:26:52 crc kubenswrapper[4766]: I0130 16:26:52.898144 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.254703 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.445268 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.483696 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 16:26:53 crc kubenswrapper[4766]: I0130 16:26:53.776707 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.195846 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.406952 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.529693 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.697112 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.894888 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 16:26:54 crc kubenswrapper[4766]: I0130 16:26:54.919138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.000819 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.599727 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.627747 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.732786 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.736881 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.961636 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 16:26:55 crc kubenswrapper[4766]: I0130 16:26:55.963570 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.145324 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.224658 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.302678 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.334607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.350458 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.355949 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.419076 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.427481 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.430863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.452462 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.493123 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.522045 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.672258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.697016 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.753786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.883509 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.891062 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.948912 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.966525 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.976930 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 16:26:56 crc kubenswrapper[4766]: I0130 16:26:56.997588 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.065853 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.102283 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.271453 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.301231 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.324766 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.379962 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.456258 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.485956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.495020 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.499972 4766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.534723 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.548572 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.598999 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.641702 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.681621 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.696208 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.716579 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.737973 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.744157 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.819081 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 16:26:57 crc kubenswrapper[4766]: I0130 16:26:57.861759 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.036007 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.069399 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.136108 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.145764 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.150729 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.209784 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.241712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.251482 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.426425 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.441775 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.478959 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.740810 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.826165 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.877345 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.879676 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.885709 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.902407 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 16:26:58 crc kubenswrapper[4766]: I0130 16:26:58.993841 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.023938 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.044138 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.074297 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.087128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.201511 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.210375 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.315660 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.329811 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.349337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.391636 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.399780 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.495701 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.557046 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.581422 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.592492 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.616008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.617478 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.690398 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.698317 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.713600 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.720583 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.802689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.947448 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 16:26:59 crc kubenswrapper[4766]: I0130 16:26:59.979033 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.086246 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.134044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.167410 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.194139 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.194584 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.200809 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.222091 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.273835 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.298127 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.376318 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.392066 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.436919 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.451817 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.458489 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.482928 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.490781 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.501387 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.558163 4766 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.587358 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.693893 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.804357 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.827140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 16:27:00 crc kubenswrapper[4766]: I0130 16:27:00.986221 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.003623 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.082020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.299350 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.340742 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.452906 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.468006 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.678853 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.756798 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.761014 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.769661 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 16:27:01 crc kubenswrapper[4766]: I0130 16:27:01.993844 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.023202 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.057257 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.177107 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.235316 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.238061 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.289843 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.314128 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.365916 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.380487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.423380 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.435579 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.470224 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.495060 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.544197 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.880461 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.890388 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.924620 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 16:27:02 crc kubenswrapper[4766]: I0130 16:27:02.979625 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.015883 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.016143 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.019520 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.088632 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.118752 4766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.134315 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.140515 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.183337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.222909 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.250992 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.274486 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.316739 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.338627 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.348404 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.500960 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.642901 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.673068 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.686150 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.691124 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.696497 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.776846 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.777369 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.851279 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.873338 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.932982 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 16:27:03 crc kubenswrapper[4766]: I0130 16:27:03.981123 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.034084 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.099672 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.192542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.306166 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.397958 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.666500 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.761159 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.787695 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.791608 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.802925 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.841352 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.872452 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 16:27:04 crc kubenswrapper[4766]: I0130 16:27:04.894066 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.071396 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.087920 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.135736 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.364258 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.544064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.632936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.695783 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.739264 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.748938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.784075 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 16:27:05 crc kubenswrapper[4766]: I0130 16:27:05.988054 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.136496 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.167673 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.286115 4766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291196 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-sbckt"]
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291273 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6fffd54687-fl5rm"]
Jan 30 16:27:06 crc kubenswrapper[4766]: E0130 16:27:06.291500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291520 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer"
Jan 30 16:27:06 crc kubenswrapper[4766]: E0130 16:27:06.291532 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291643 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b30b717-ab4b-428d-8d98-f035422849b5" containerName="installer"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291655 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" containerName="oauth-openshift"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291785 4766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.291829 4766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a866f582-e240-4058-a5ab-7c73e33d80fa"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.292150 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295204 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295220 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295472 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295563 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.295758 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296048 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296073 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296230 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.296368 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.297998 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.298065 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.298094 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.308474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.309198 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.319655 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.346154 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.3461338 podStartE2EDuration="24.3461338s" podCreationTimestamp="2026-01-30 16:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:06.345921194 +0000 UTC m=+280.983878560" watchObservedRunningTime="2026-01-30 16:27:06.3461338 +0000 UTC m=+280.984091146"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.365978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.366053 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408379 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408521 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408825 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.408994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409087 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.409145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.430486 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.480587 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.498536 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510611 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510751 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510788 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510852 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.510869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.511628 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-dir\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512137 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.512923 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dfb08685-43c0-4cd6-bb82-51f5df825923-audit-policies\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.519922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.520051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.520412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-session\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.521700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-error\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.522112 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.528633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dfb08685-43c0-4cd6-bb82-51f5df825923-v4-0-config-user-template-login\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.531456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z6l7\" (UniqueName: \"kubernetes.io/projected/dfb08685-43c0-4cd6-bb82-51f5df825923-kube-api-access-6z6l7\") pod \"oauth-openshift-6fffd54687-fl5rm\" (UID: \"dfb08685-43c0-4cd6-bb82-51f5df825923\") " pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm"
Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.616969 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.659904 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.865299 4766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.865364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fffd54687-fl5rm"] Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.926804 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:27:06 crc kubenswrapper[4766]: I0130 16:27:06.983143 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.104669 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.381666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.426343 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.447665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" event={"ID":"dfb08685-43c0-4cd6-bb82-51f5df825923","Type":"ContainerStarted","Data":"19faad2de142e6eb25b9f845611d4223a106b12e69c6bf20e7bcff9c8b2fa028"} Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.447728 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" event={"ID":"dfb08685-43c0-4cd6-bb82-51f5df825923","Type":"ContainerStarted","Data":"6cd1b270a44652628af6ab31f77d6e4512e027ce67447faaf88f8341b03fe40b"} Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.448100 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.473058 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" podStartSLOduration=61.473026146 podStartE2EDuration="1m1.473026146s" podCreationTimestamp="2026-01-30 16:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:07.469150563 +0000 UTC m=+282.107107939" watchObservedRunningTime="2026-01-30 16:27:07.473026146 +0000 UTC m=+282.110983492" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.528169 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.594849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.595077 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:27:07 crc 
kubenswrapper[4766]: I0130 16:27:07.632418 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.722993 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.744399 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.770519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:27:07 crc kubenswrapper[4766]: I0130 16:27:07.837643 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.031600 4766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.047848 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a8aae5-a6f8-43e0-ab59-1e6af94eb133" path="/var/lib/kubelet/pods/21a8aae5-a6f8-43e0-ab59-1e6af94eb133/volumes" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.297810 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.326978 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.608621 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.699767 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:27:08 crc kubenswrapper[4766]: I0130 16:27:08.757833 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:27:09 crc kubenswrapper[4766]: I0130 16:27:09.476902 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:27:11 crc kubenswrapper[4766]: I0130 16:27:11.672822 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:27:16 crc kubenswrapper[4766]: I0130 16:27:16.200880 4766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:27:16 crc kubenswrapper[4766]: I0130 16:27:16.201439 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" gracePeriod=5 Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.521904 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.522268 4766 generic.go:334] "Generic 
(PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" exitCode=137 Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.769328 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.769863 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818504 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818570 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818663 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818688 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818951 4766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818968 4766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818978 4766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.818988 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.826760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:27:21 crc kubenswrapper[4766]: I0130 16:27:21.919493 4766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.045970 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.530821 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.530901 4766 scope.go:117] "RemoveContainer" containerID="1fd1b478d8e899504c0fde3f05b01dd9e95e984e187c19d6fb8a7235d9242bd2" Jan 30 16:27:22 crc kubenswrapper[4766]: I0130 16:27:22.531064 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.416278 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.416629 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" containerID="cri-o://cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" gracePeriod=30 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.513294 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.514051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" containerID="cri-o://4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" gracePeriod=30 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.556659 4766 generic.go:334] "Generic (PLEG): container finished" podID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada" exitCode=0 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.556798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"} Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.557428 4766 scope.go:117] "RemoveContainer" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.564745 4766 generic.go:334] "Generic (PLEG): container finished" podID="807df97f-b371-4d04-81e9-b1a823a8a638" containerID="cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" exitCode=0 Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.564793 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerDied","Data":"cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1"} Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.700083 4766 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mfclt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.700197 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.837106 4766 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.857821 4766 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.871998 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.872031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") pod \"807df97f-b371-4d04-81e9-b1a823a8a638\" (UID: \"807df97f-b371-4d04-81e9-b1a823a8a638\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.872560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca" (OuterVolumeSpecName: "client-ca") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.873120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config" (OuterVolumeSpecName: "config") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.873314 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.881518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.881652 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv" (OuterVolumeSpecName: "kube-api-access-5zmsv") pod "807df97f-b371-4d04-81e9-b1a823a8a638" (UID: "807df97f-b371-4d04-81e9-b1a823a8a638"). InnerVolumeSpecName "kube-api-access-5zmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.907734 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973246 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973433 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973464 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") pod \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\" (UID: \"798137fc-1490-4b1c-ac4d-77b6c9e56d05\") " Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973743 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zmsv\" (UniqueName: \"kubernetes.io/projected/807df97f-b371-4d04-81e9-b1a823a8a638-kube-api-access-5zmsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973774 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973788 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973804 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/807df97f-b371-4d04-81e9-b1a823a8a638-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.973817 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/807df97f-b371-4d04-81e9-b1a823a8a638-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.974541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca" (OuterVolumeSpecName: "client-ca") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.974729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config" (OuterVolumeSpecName: "config") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.977872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:25 crc kubenswrapper[4766]: I0130 16:27:25.977988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j" (OuterVolumeSpecName: "kube-api-access-ks54j") pod "798137fc-1490-4b1c-ac4d-77b6c9e56d05" (UID: "798137fc-1490-4b1c-ac4d-77b6c9e56d05"). InnerVolumeSpecName "kube-api-access-ks54j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074772 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks54j\" (UniqueName: \"kubernetes.io/projected/798137fc-1490-4b1c-ac4d-77b6c9e56d05-kube-api-access-ks54j\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074809 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074820 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/798137fc-1490-4b1c-ac4d-77b6c9e56d05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.074832 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/798137fc-1490-4b1c-ac4d-77b6c9e56d05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.571912 4766 generic.go:334] "Generic (PLEG): container finished" podID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" exitCode=0 Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.571978 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerDied","Data":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572030 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt" event={"ID":"798137fc-1490-4b1c-ac4d-77b6c9e56d05","Type":"ContainerDied","Data":"777f165aaa35e8debb71a11164cf2e0013257285fafc5c165738c7722a8711a4"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.572049 4766 scope.go:117] "RemoveContainer" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.575381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerStarted","Data":"b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.575778 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.578154 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.579750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" event={"ID":"807df97f-b371-4d04-81e9-b1a823a8a638","Type":"ContainerDied","Data":"442796fe00494142d89b0e1b9d6820cd3ac80019a54bf8a35e0ec68f7d85bbbf"} Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.579815 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgkvz" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.594846 4766 scope.go:117] "RemoveContainer" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.595979 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:26 crc kubenswrapper[4766]: E0130 16:27:26.596031 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": container with ID starting with 4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0 not found: ID does not exist" containerID="4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.596106 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0"} err="failed to get container status \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": rpc error: code = NotFound desc = could not find container \"4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0\": container with ID starting with 4921083d2193553df918df075b998b9501b901f014de50a516ea94d274b8abf0 not found: ID does not exist" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.596265 4766 scope.go:117] "RemoveContainer" containerID="cdc8f66f787e17b15a0e7454e23799f03cb73f4271321de8e857fb5adbb8d6e1" Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.605268 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mfclt"] Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.628900 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:26 crc kubenswrapper[4766]: I0130 16:27:26.633519 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgkvz"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.369534 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370160 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370263 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370345 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370402 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: E0130 16:27:27.370466 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 
16:27:27.370528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370683 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370746 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" containerName="controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.370812 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" containerName="route-controller-manager" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.371337 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.374534 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.375631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.375904 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.377461 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.378001 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.378397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380156 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380435 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380624 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.380940 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.381777 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.381931 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.382040 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.382419 4766 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.386653 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.386785 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.390661 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"] Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391212 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391253 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391384 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391674 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391698 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.391796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493780 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493882 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: 
\"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493915 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.493970 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.494005 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495294 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-client-ca\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495431 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.495776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-config\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.496295 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.501348 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-serving-cert\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.507941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.513390 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsl4\" (UniqueName: \"kubernetes.io/projected/65ec52f3-f575-4a70-ad65-a7cce55ba3bd-kube-api-access-klsl4\") pod \"route-controller-manager-7f7c67755b-6mn4d\" (UID: \"65ec52f3-f575-4a70-ad65-a7cce55ba3bd\") " pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.514759 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"controller-manager-9f999584f-bwdvp\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") " pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.689723 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.699844 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.874489 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:27 crc kubenswrapper[4766]: I0130 16:27:27.906667 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"]
Jan 30 16:27:27 crc kubenswrapper[4766]: W0130 16:27:27.917596 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ec52f3_f575_4a70_ad65_a7cce55ba3bd.slice/crio-ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb WatchSource:0}: Error finding container ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb: Status 404 returned error can't find the container with id ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.047105 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798137fc-1490-4b1c-ac4d-77b6c9e56d05" path="/var/lib/kubelet/pods/798137fc-1490-4b1c-ac4d-77b6c9e56d05/volumes"
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.048067 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807df97f-b371-4d04-81e9-b1a823a8a638" path="/var/lib/kubelet/pods/807df97f-b371-4d04-81e9-b1a823a8a638/volumes"
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.597790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerStarted","Data":"1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449"}
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.598106 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.598118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerStarted","Data":"779aa30092a73b8f0ead09d3638ab33c4bdd98e3a50ef1e6f57c47c69049b23a"}
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.600396 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" event={"ID":"65ec52f3-f575-4a70-ad65-a7cce55ba3bd","Type":"ContainerStarted","Data":"65270142116f308163aab3be005a4bf9c3c613fc78e5a00d1ac0575954c96b31"}
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.600445 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" event={"ID":"65ec52f3-f575-4a70-ad65-a7cce55ba3bd","Type":"ContainerStarted","Data":"ec08d796f2b7bb87253867702d82a73f0c3eccb5beee5c885794f2ae843306cb"}
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.603638 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:28 crc kubenswrapper[4766]: I0130 16:27:28.618453 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" podStartSLOduration=3.618434208 podStartE2EDuration="3.618434208s" podCreationTimestamp="2026-01-30 16:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:28.613963119 +0000 UTC m=+303.251920465" watchObservedRunningTime="2026-01-30 16:27:28.618434208 +0000 UTC m=+303.256391554"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.605905 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.611075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d"
Jan 30 16:27:29 crc kubenswrapper[4766]: I0130 16:27:29.632942 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f7c67755b-6mn4d" podStartSLOduration=4.6329231140000005 podStartE2EDuration="4.632923114s" podCreationTimestamp="2026-01-30 16:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:28.65761658 +0000 UTC m=+303.295573946" watchObservedRunningTime="2026-01-30 16:27:29.632923114 +0000 UTC m=+304.270880470"
Jan 30 16:27:31 crc kubenswrapper[4766]: I0130 16:27:31.304793 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:31 crc kubenswrapper[4766]: I0130 16:27:31.615215 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager" containerID="cri-o://1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449" gracePeriod=30
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.690769 4766 generic.go:334] "Generic (PLEG): container finished" podID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerID="1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449" exitCode=0
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.691093 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerDied","Data":"1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449"}
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.875174 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.903669 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:32 crc kubenswrapper[4766]: E0130 16:27:32.903955 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.903978 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.904121 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" containerName="controller-manager"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.904635 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.924227 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.977616 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978413 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978473 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") pod \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\" (UID: \"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b\") "
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978824 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.978975 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca" (OuterVolumeSpecName: "client-ca") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.979219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config" (OuterVolumeSpecName: "config") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.983882 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz" (OuterVolumeSpecName: "kube-api-access-qgvcz") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "kube-api-access-qgvcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:32 crc kubenswrapper[4766]: I0130 16:27:32.984533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" (UID: "35a5ead8-3b9f-4ac8-9266-3dc405b7c80b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080917 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080928 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvcz\" (UniqueName: \"kubernetes.io/projected/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-kube-api-access-qgvcz\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080941 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080950 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.080958 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.082808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.083632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.084200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.085597 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.100294 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"controller-manager-d55469fcf-485sj\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.225662 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.450730 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp" event={"ID":"35a5ead8-3b9f-4ac8-9266-3dc405b7c80b","Type":"ContainerDied","Data":"779aa30092a73b8f0ead09d3638ab33c4bdd98e3a50ef1e6f57c47c69049b23a"}
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699525 4766 scope.go:117] "RemoveContainer" containerID="1bca5c1041071b4b73c2ca9a76efeda879c0c0766a6198ad9b35a9d7a5432449"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.699422 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-bwdvp"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.705397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerStarted","Data":"d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a"}
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.705444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerStarted","Data":"e58fbe7996a8ff003a2b6f7f74a31d396be00251f43d6d9bee24d2bba733d54a"}
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.707847 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.707970 4766 patch_prober.go:28] interesting pod/controller-manager-d55469fcf-485sj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.708019 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.750904 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podStartSLOduration=2.750877336 podStartE2EDuration="2.750877336s" podCreationTimestamp="2026-01-30 16:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:33.733594657 +0000 UTC m=+308.371552023" watchObservedRunningTime="2026-01-30 16:27:33.750877336 +0000 UTC m=+308.388834702"
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.764691 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:33 crc kubenswrapper[4766]: I0130 16:27:33.776524 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-bwdvp"]
Jan 30 16:27:34 crc kubenswrapper[4766]: I0130 16:27:34.046253 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a5ead8-3b9f-4ac8-9266-3dc405b7c80b" path="/var/lib/kubelet/pods/35a5ead8-3b9f-4ac8-9266-3dc405b7c80b/volumes"
Jan 30 16:27:34 crc kubenswrapper[4766]: I0130 16:27:34.718232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.005173 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.006131 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-969pn" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" containerID="cri-o://3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d" gracePeriod=30
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.017256 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrcth"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.017522 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrcth" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" containerID="cri-o://06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b" gracePeriod=30
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.031292 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.031576 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" containerID="cri-o://b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434" gracePeriod=30
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.041333 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.042031 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qct46" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" containerID="cri-o://4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a" gracePeriod=30
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.055900 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.056272 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hfpqw" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" containerID="cri-o://be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8" gracePeriod=30
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.062508 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.063759 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.066815 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162863 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.162933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.264645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.266263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.276166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b001665-9e64-4f29-b35f-5f702206ae07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.284653 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b996b\" (UniqueName: \"kubernetes.io/projected/2b001665-9e64-4f29-b35f-5f702206ae07-kube-api-access-b996b\") pod \"marketplace-operator-79b997595-rwhkx\" (UID: \"2b001665-9e64-4f29-b35f-5f702206ae07\") " pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.537563 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.544612 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568437 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.568537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") pod \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\" (UID: \"ac4a36f6-21fe-4374-adaf-4505d59ce4c5\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.570413 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities" (OuterVolumeSpecName: "utilities") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.584774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8" (OuterVolumeSpecName: "kube-api-access-fhvw8") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "kube-api-access-fhvw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac4a36f6-21fe-4374-adaf-4505d59ce4c5" (UID: "ac4a36f6-21fe-4374-adaf-4505d59ce4c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670481 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvw8\" (UniqueName: \"kubernetes.io/projected/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-kube-api-access-fhvw8\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670497 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.670509 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a36f6-21fe-4374-adaf-4505d59ce4c5-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.747106 4766 generic.go:334] "Generic (PLEG): container finished" podID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerID="be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8" exitCode=0
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.747381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751736 4766 generic.go:334] "Generic (PLEG): container finished" podID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b" exitCode=0
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751854 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrcth" event={"ID":"ac4a36f6-21fe-4374-adaf-4505d59ce4c5","Type":"ContainerDied","Data":"5097ba380ecfee61c19e8e36f0d186a1b5b9774436685bd5dece65fcdce6e72b"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751844 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrcth"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.751918 4766 scope.go:117] "RemoveContainer" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.759091 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f598bfe-913e-4236-b3c5-78268f38396c" containerID="4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a" exitCode=0
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.759172 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.761311 4766 generic.go:334] "Generic (PLEG): container finished" podID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerID="b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434" exitCode=0
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.761392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.783747 4766 generic.go:334] "Generic (PLEG): container finished" podID="f55dc373-49c6-4b05-a945-79614dc282d8" containerID="3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d" exitCode=0
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.783806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"}
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.809612 4766 scope.go:117] "RemoveContainer" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.813933 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrcth"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.822009 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrcth"]
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.839452 4766 scope.go:117] "RemoveContainer" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.855851 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858446 4766 scope.go:117] "RemoveContainer" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.858781 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": container with ID starting with 06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b not found: ID does not exist" containerID="06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858815 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b"} err="failed to get container status \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": rpc error: code = NotFound desc = could not find container \"06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b\": container with ID starting with 06c47e9228d7637aaf17f2cb5cbd46136e93dada97e45b28a37e4ae451827e8b not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.858840 4766 scope.go:117] "RemoveContainer" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.859414 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": container with ID starting with 5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a not found: ID does not exist" containerID="5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859472 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a"} err="failed to get container status \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": rpc error: code = NotFound desc = could not find container \"5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a\": container with ID starting with 5e2714807bc4ce8fb75ec066ec680c820dce572b55a6c11d3376cbf3349b0a9a not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859488 4766 scope.go:117] "RemoveContainer" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"
Jan 30 16:27:39 crc kubenswrapper[4766]: E0130 16:27:39.859881 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": container with ID starting with 9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d not found: ID does not exist" containerID="9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859903 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d"} err="failed to get container status \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": rpc error: code = NotFound desc = could not find container \"9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d\": container with ID starting with 9ac850580190254e68ddee9b089f6bc2d6e691ff6d344eda4986105f7c8ca18d not found: ID does not exist"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.859916 4766 scope.go:117] "RemoveContainer" containerID="9baf130b02720b533f5cfa486ecbaff1522a0002fe7c262131847af34db02ada"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873862 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.873956 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") pod \"f55dc373-49c6-4b05-a945-79614dc282d8\" (UID: \"f55dc373-49c6-4b05-a945-79614dc282d8\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.875814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities" (OuterVolumeSpecName: "utilities") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.884040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r" (OuterVolumeSpecName: "kube-api-access-nc27r") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "kube-api-access-nc27r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.900201 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.902421 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.903623 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.974962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975143 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975232 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") pod \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\" (UID: \"cdbd0f5d-e6fb-4960-a928-7a5dcc399239\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") pod \"9f598bfe-913e-4236-b3c5-78268f38396c\" (UID: \"9f598bfe-913e-4236-b3c5-78268f38396c\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") pod \"50a11a60-476d-48af-9ff9-b3d9841e6260\" (UID: \"50a11a60-476d-48af-9ff9-b3d9841e6260\") "
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975568 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.975584 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc27r\" (UniqueName: \"kubernetes.io/projected/f55dc373-49c6-4b05-a945-79614dc282d8-kube-api-access-nc27r\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976756 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities" (OuterVolumeSpecName: "utilities") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.976879 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f55dc373-49c6-4b05-a945-79614dc282d8" (UID: "f55dc373-49c6-4b05-a945-79614dc282d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.982875 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities" (OuterVolumeSpecName: "utilities") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.985040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.987784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8" (OuterVolumeSpecName: "kube-api-access-h4xn8") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "kube-api-access-h4xn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.988562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4" (OuterVolumeSpecName: "kube-api-access-6gqt4") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "kube-api-access-6gqt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:39 crc kubenswrapper[4766]: I0130 16:27:39.988680 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t" (OuterVolumeSpecName: "kube-api-access-h4d8t") pod "cdbd0f5d-e6fb-4960-a928-7a5dcc399239" (UID: "cdbd0f5d-e6fb-4960-a928-7a5dcc399239"). InnerVolumeSpecName "kube-api-access-h4d8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.022214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f598bfe-913e-4236-b3c5-78268f38396c" (UID: "9f598bfe-913e-4236-b3c5-78268f38396c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.046531 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" path="/var/lib/kubelet/pods/ac4a36f6-21fe-4374-adaf-4505d59ce4c5/volumes"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076897 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55dc373-49c6-4b05-a945-79614dc282d8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076930 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gqt4\" (UniqueName: \"kubernetes.io/projected/9f598bfe-913e-4236-b3c5-78268f38396c-kube-api-access-6gqt4\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076939 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4xn8\" (UniqueName: \"kubernetes.io/projected/50a11a60-476d-48af-9ff9-b3d9841e6260-kube-api-access-h4xn8\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076949 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076958 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4d8t\" (UniqueName: \"kubernetes.io/projected/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-kube-api-access-h4d8t\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076967 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076979 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076988 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f598bfe-913e-4236-b3c5-78268f38396c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.076996 4766 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdbd0f5d-e6fb-4960-a928-7a5dcc399239-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.115776 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50a11a60-476d-48af-9ff9-b3d9841e6260" (UID: "50a11a60-476d-48af-9ff9-b3d9841e6260"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.129109 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rwhkx"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.181220 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a11a60-476d-48af-9ff9-b3d9841e6260-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qct46" event={"ID":"9f598bfe-913e-4236-b3c5-78268f38396c","Type":"ContainerDied","Data":"e4ade6f221dc5ead87adec26ae126b386fc4d9600ec068ed3a99f86aa9f21eef"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793835 4766 scope.go:117] "RemoveContainer" containerID="4fedda1f3608f9c6b64edb78a08731aa0ddac6e0535fa53504800f729c59836a"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.793723 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qct46"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.796116 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.796115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wcmvb" event={"ID":"cdbd0f5d-e6fb-4960-a928-7a5dcc399239","Type":"ContainerDied","Data":"f1bcfef40c047ee2d486510556be4c02c15197feb65c844e1b250852a3541990"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.799325 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-969pn" event={"ID":"f55dc373-49c6-4b05-a945-79614dc282d8","Type":"ContainerDied","Data":"89ef9d87bc4ca6e14617c5d57a66c8f3479be224d2f0014eefd70f2deeb130e1"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.799393 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-969pn"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.810919 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfpqw"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.810911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfpqw" event={"ID":"50a11a60-476d-48af-9ff9-b3d9841e6260","Type":"ContainerDied","Data":"56a4698fa29d8b3f31ac2d170f28bf29651c60264c984a5bcb461ab8477202c2"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" event={"ID":"2b001665-9e64-4f29-b35f-5f702206ae07","Type":"ContainerStarted","Data":"64f0e72481e287d2859faa639293ca26fd2e424e6fafde2e1eff36e2e5d8eae7"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" event={"ID":"2b001665-9e64-4f29-b35f-5f702206ae07","Type":"ContainerStarted","Data":"ad025bb4c60cac767acc5ddcf4b0302bb14775160c22b853d71e08d2f4a26feb"}
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813938 4766 scope.go:117] "RemoveContainer" containerID="543dbb0915881eb0de3020763b26d25afd72cbd7d1477df0b515d8849845cb0f"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.813947 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.832941 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.843390 4766 scope.go:117] "RemoveContainer" containerID="ec0ce517870aafe9b0b52ea02febd0b91432faa6102be5a4c960f4e6d47e8c20"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.853451 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.859510 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qct46"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.862095 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.865416 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wcmvb"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.867008 4766 scope.go:117] "RemoveContainer" containerID="b6e9379c9cd40d8f1beccde490be8ea8ec9eabe93e20ab939489087d2f14c434"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.870997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.885167 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-969pn"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.894004 4766 scope.go:117] "RemoveContainer" containerID="3c17de7d9c8ff462aee20d6633666e6e8afb94763702757ff150c69ee7ee111d"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.903321 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rwhkx" podStartSLOduration=1.903305022 podStartE2EDuration="1.903305022s" podCreationTimestamp="2026-01-30 16:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:40.902520621 +0000 UTC m=+315.540477977" watchObservedRunningTime="2026-01-30 16:27:40.903305022 +0000 UTC m=+315.541262388"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.912039 4766 scope.go:117] "RemoveContainer" containerID="18913b64598e390c8024ffdd2beaf8bfc1733f79b6e172d846d92e917392a4f2"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.918296 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.923384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hfpqw"]
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.931547 4766 scope.go:117] "RemoveContainer" containerID="01a6df12be346d87bb230eb7d19417e7d00327a79babb5d36b9be297a80a0970"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.958951 4766 scope.go:117] "RemoveContainer" containerID="be886e6bce28f07837bd1e5ff07fcae13b22456b433498c736f7be7e1ef836d8"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.972273 4766 scope.go:117] "RemoveContainer" containerID="56845faa6a2886e9495f7e3b56129ef294daca0a466636b522f89f4aba889fd6"
Jan 30 16:27:40 crc kubenswrapper[4766]: I0130 16:27:40.992236 4766 scope.go:117] "RemoveContainer" containerID="6326cb8b7c494cb94cd7ca4aaa3a58767027c93625175f1ed1562feb35a32331"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.045694 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" path="/var/lib/kubelet/pods/50a11a60-476d-48af-9ff9-b3d9841e6260/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.046941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" path="/var/lib/kubelet/pods/9f598bfe-913e-4236-b3c5-78268f38396c/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.047695 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" path="/var/lib/kubelet/pods/cdbd0f5d-e6fb-4960-a928-7a5dcc399239/volumes"
Jan 30 16:27:42 crc kubenswrapper[4766]: I0130 16:27:42.048719 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" path="/var/lib/kubelet/pods/f55dc373-49c6-4b05-a945-79614dc282d8/volumes"
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.365910 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"]
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.366456 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" containerID="cri-o://d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" gracePeriod=30
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.848387 4766 generic.go:334] "Generic (PLEG): container finished" podID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerID="d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" exitCode=0
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.848472 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerDied","Data":"d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a"}
Jan 30 16:27:45 crc kubenswrapper[4766]: I0130 16:27:45.968142 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj"
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057358 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") "
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.058797 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059002 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config" (OuterVolumeSpecName: "config") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059033 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.057522 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059319 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") pod \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\" (UID: \"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318\") " Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059552 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059569 4766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.059578 4766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.066282 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd" (OuterVolumeSpecName: "kube-api-access-27tgd") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "kube-api-access-27tgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.070102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" (UID: "9bff95c9-c6a1-4ee5-ac7a-68dac2da0318"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.161065 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tgd\" (UniqueName: \"kubernetes.io/projected/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-kube-api-access-27tgd\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.161112 4766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.855635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" event={"ID":"9bff95c9-c6a1-4ee5-ac7a-68dac2da0318","Type":"ContainerDied","Data":"e58fbe7996a8ff003a2b6f7f74a31d396be00251f43d6d9bee24d2bba733d54a"} Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.855710 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d55469fcf-485sj" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.856044 4766 scope.go:117] "RemoveContainer" containerID="d311fa8670dceb7e4a31251ca8e6a5715eb8dab77a0e0a77753b1ca24a74735a" Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.886417 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"] Jan 30 16:27:46 crc kubenswrapper[4766]: I0130 16:27:46.893857 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d55469fcf-485sj"] Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.382837 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"] Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383082 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383094 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383105 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383110 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383116 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383131 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383137 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383146 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383151 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383159 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383165 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383172 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383190 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383203 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383209 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383220 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383226 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383232 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383238 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383247 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383252 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383266 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-utilities" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383275 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383281 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="extract-content" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383289 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383295 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: E0130 16:27:47.383302 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383308 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383389 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 
16:27:47.383397 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a11a60-476d-48af-9ff9-b3d9841e6260" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383403 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" containerName="controller-manager" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383412 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55dc373-49c6-4b05-a945-79614dc282d8" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383426 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f598bfe-913e-4236-b3c5-78268f38396c" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383434 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4a36f6-21fe-4374-adaf-4505d59ce4c5" containerName="registry-server" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.383825 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.385876 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.386794 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.387714 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.388912 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.389089 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.390320 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.401719 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.406123 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"] Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 
16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.479990 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.480046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.480247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.581589 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.582831 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-client-ca\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.583516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-config\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.583582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/faac4a21-a6d9-49cb-aa50-a78811180a26-proxy-ca-bundles\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.595437 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faac4a21-a6d9-49cb-aa50-a78811180a26-serving-cert\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.602451 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm4c\" (UniqueName: \"kubernetes.io/projected/faac4a21-a6d9-49cb-aa50-a78811180a26-kube-api-access-btm4c\") pod \"controller-manager-9f999584f-p9fmc\" (UID: \"faac4a21-a6d9-49cb-aa50-a78811180a26\") " pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:47 crc kubenswrapper[4766]: I0130 16:27:47.703753 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.048556 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bff95c9-c6a1-4ee5-ac7a-68dac2da0318" path="/var/lib/kubelet/pods/9bff95c9-c6a1-4ee5-ac7a-68dac2da0318/volumes" Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.147367 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9f999584f-p9fmc"] Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.873392 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" event={"ID":"faac4a21-a6d9-49cb-aa50-a78811180a26","Type":"ContainerStarted","Data":"a1385a0ef21788a01b4db812c90b4b2ef2d42befd912556df8c69aa87dcfcd7c"} Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.874069 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.874085 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" event={"ID":"faac4a21-a6d9-49cb-aa50-a78811180a26","Type":"ContainerStarted","Data":"24355eb30bec46eb83e3211c8bae21fc355f9439589efa7eae30cf23a54a185e"} Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.879095 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" Jan 30 16:27:48 crc kubenswrapper[4766]: I0130 16:27:48.892489 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9f999584f-p9fmc" podStartSLOduration=3.892471524 podStartE2EDuration="3.892471524s" podCreationTimestamp="2026-01-30 16:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:27:48.891895229 +0000 UTC m=+323.529852585" watchObservedRunningTime="2026-01-30 16:27:48.892471524 +0000 UTC m=+323.530428870" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.112086 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"] Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.112987 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd0f5d-e6fb-4960-a928-7a5dcc399239" containerName="marketplace-operator" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.113387 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.135556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"] Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193829 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193955 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.193984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194015 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.194277 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.212142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296429 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296490 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296595 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.296694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.297135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e691f63b-e081-4e1f-9d9e-3af3af8749bc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.298038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-trusted-ca\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.298064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-certificates\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.305141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e691f63b-e081-4e1f-9d9e-3af3af8749bc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.305377 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-registry-tls\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.313996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scsc\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-kube-api-access-7scsc\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.317781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e691f63b-e081-4e1f-9d9e-3af3af8749bc-bound-sa-token\") pod \"image-registry-66df7c8f76-j67vg\" (UID: \"e691f63b-e081-4e1f-9d9e-3af3af8749bc\") " pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.429275 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.861099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j67vg"] Jan 30 16:28:02 crc kubenswrapper[4766]: I0130 16:28:02.946004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" event={"ID":"e691f63b-e081-4e1f-9d9e-3af3af8749bc","Type":"ContainerStarted","Data":"c9b68069e30b45190858f72e51693a4243d4226fd4159d3db90ecdd90bd4cb0c"} Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.952557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" event={"ID":"e691f63b-e081-4e1f-9d9e-3af3af8749bc","Type":"ContainerStarted","Data":"cd3b4983f2b0eb75ee718357bedf79fc7950fa3ab7cebc59df1905e5af5cfa67"} Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.952964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:03 crc kubenswrapper[4766]: I0130 16:28:03.986513 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" podStartSLOduration=1.986491587 podStartE2EDuration="1.986491587s" podCreationTimestamp="2026-01-30 16:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:28:03.986432216 +0000 UTC m=+338.624389582" watchObservedRunningTime="2026-01-30 16:28:03.986491587 +0000 UTC m=+338.624448933" Jan 30 16:28:22 crc kubenswrapper[4766]: I0130 16:28:22.434378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-j67vg" Jan 30 16:28:22 crc kubenswrapper[4766]: I0130 16:28:22.484517 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.279859 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.281799 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.283934 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.289369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.373993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.469196 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.470487 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.472510 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475414 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-utilities\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.475911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-catalog-content\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.479385 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.497735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl7wl\" (UniqueName: \"kubernetes.io/projected/45931cc3-9fdc-43a0-bc52-7ac389c4f75b-kube-api-access-hl7wl\") pod \"community-operators-9s94z\" (UID: \"45931cc3-9fdc-43a0-bc52-7ac389c4f75b\") " pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577263 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577369 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: 
\"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.577430 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.651947 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9s94z" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678274 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.678800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.679065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.701984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"certified-operators-sqx4x\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:28 crc kubenswrapper[4766]: I0130 16:28:28.783614 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.062151 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s94z"] Jan 30 16:28:29 crc kubenswrapper[4766]: W0130 16:28:29.064553 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45931cc3_9fdc_43a0_bc52_7ac389c4f75b.slice/crio-70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e WatchSource:0}: Error finding container 70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e: Status 404 returned error can't find the container with id 70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.078755 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerStarted","Data":"70905ce6d0cf791bf32733efc45389df679fed0c62b0a32da5a573e02225b27e"} Jan 30 16:28:29 crc kubenswrapper[4766]: I0130 16:28:29.194068 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.085699 4766 generic.go:334] "Generic (PLEG): container finished" podID="45931cc3-9fdc-43a0-bc52-7ac389c4f75b" containerID="0b941cc6b7547eb39ab2f29096c216bd65a342eedf24fba721f6d7abced9eeb3" exitCode=0 Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.085784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerDied","Data":"0b941cc6b7547eb39ab2f29096c216bd65a342eedf24fba721f6d7abced9eeb3"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093199 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" exitCode=0 Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.093288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerStarted","Data":"4e2e822728d72b043828d2c376fae8de09ee8b30107e67f666204b30101944fd"} Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.673811 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"] Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.674852 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.690938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.691773 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"]
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.705224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.806848 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807024 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807466 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-utilities\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.807483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf71edb-8510-412d-95bd-028b90482ad1-catalog-content\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.827143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdrq\" (UniqueName: \"kubernetes.io/projected/5bf71edb-8510-412d-95bd-028b90482ad1-kube-api-access-tvdrq\") pod \"redhat-marketplace-d8wb8\" (UID: \"5bf71edb-8510-412d-95bd-028b90482ad1\") " pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.873197 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"]
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.874425 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.879405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 16:28:30 crc kubenswrapper[4766]: I0130 16:28:30.886267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"]
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.009409 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013348 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.013436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.115631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.116381 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.116656 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.137826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"redhat-operators-ck55d\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.195501 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.540649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8wb8"]
Jan 30 16:28:31 crc kubenswrapper[4766]: W0130 16:28:31.548246 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bf71edb_8510_412d_95bd_028b90482ad1.slice/crio-a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e WatchSource:0}: Error finding container a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e: Status 404 returned error can't find the container with id a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e
Jan 30 16:28:31 crc kubenswrapper[4766]: I0130 16:28:31.652522 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"]
Jan 30 16:28:31 crc kubenswrapper[4766]: W0130 16:28:31.716267 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode775d594_6680_4e4a_8b1f_01f3a0738015.slice/crio-f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425 WatchSource:0}: Error finding container f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425: Status 404 returned error can't find the container with id f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.110340 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" exitCode=0
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.110415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571"}
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119599 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" exitCode=0
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119670 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54"}
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.119695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerStarted","Data":"f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425"}
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.125091 4766 generic.go:334] "Generic (PLEG): container finished" podID="45931cc3-9fdc-43a0-bc52-7ac389c4f75b" containerID="51511cdb8a77cd476c1f4436902e5eace1abf72deb2e557361fd2a2085bea65f" exitCode=0
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.125582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerDied","Data":"51511cdb8a77cd476c1f4436902e5eace1abf72deb2e557361fd2a2085bea65f"}
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140163 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bf71edb-8510-412d-95bd-028b90482ad1" containerID="ab9f45c4bdf83a02544aa35f32e53d8adf89cd399185ca73d184784e819b21ee" exitCode=0
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140258 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerDied","Data":"ab9f45c4bdf83a02544aa35f32e53d8adf89cd399185ca73d184784e819b21ee"}
Jan 30 16:28:32 crc kubenswrapper[4766]: I0130 16:28:32.140330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"a6d3fbc36feeddeade12a2f5969134d1d73fa81cc9158ce4caede29cc936669e"}
Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.149843 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerStarted","Data":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"}
Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.152423 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s94z" event={"ID":"45931cc3-9fdc-43a0-bc52-7ac389c4f75b","Type":"ContainerStarted","Data":"8e87bb0275b753b25ae6e95f27a6de8c9a8bf65607aa22b6921c55a7c79624c1"}
Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.154927 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f"}
Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.172433 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqx4x" podStartSLOduration=2.428133548 podStartE2EDuration="5.172410416s" podCreationTimestamp="2026-01-30 16:28:28 +0000 UTC" firstStartedPulling="2026-01-30 16:28:30.094704227 +0000 UTC m=+364.732661563" lastFinishedPulling="2026-01-30 16:28:32.838981085 +0000 UTC m=+367.476938431" observedRunningTime="2026-01-30 16:28:33.167239642 +0000 UTC m=+367.805197018" watchObservedRunningTime="2026-01-30 16:28:33.172410416 +0000 UTC m=+367.810367772"
Jan 30 16:28:33 crc kubenswrapper[4766]: I0130 16:28:33.217285 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9s94z" podStartSLOduration=2.364731378 podStartE2EDuration="5.217260741s" podCreationTimestamp="2026-01-30 16:28:28 +0000 UTC" firstStartedPulling="2026-01-30 16:28:30.091503539 +0000 UTC m=+364.729460885" lastFinishedPulling="2026-01-30 16:28:32.944032902 +0000 UTC m=+367.581990248" observedRunningTime="2026-01-30 16:28:33.216606593 +0000 UTC m=+367.854563939" watchObservedRunningTime="2026-01-30 16:28:33.217260741 +0000 UTC m=+367.855218087"
Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.162531 4766 generic.go:334] "Generic (PLEG): container finished" podID="5bf71edb-8510-412d-95bd-028b90482ad1" containerID="0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f" exitCode=0
Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.162589 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerDied","Data":"0a2670f3cd94e5d451caa6d6ce4606c417090d07ea0096560b54d6d04adad77f"}
Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.166842 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" exitCode=0
Jan 30 16:28:34 crc kubenswrapper[4766]: I0130 16:28:34.167700 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd"}
Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.176783 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerStarted","Data":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"}
Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.180562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8wb8" event={"ID":"5bf71edb-8510-412d-95bd-028b90482ad1","Type":"ContainerStarted","Data":"d0169081296ca2e47a66457159417ebeaa1fe9531b78fa8baee181223da03c4d"}
Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.197851 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ck55d" podStartSLOduration=2.7345860760000003 podStartE2EDuration="5.197833858s" podCreationTimestamp="2026-01-30 16:28:30 +0000 UTC" firstStartedPulling="2026-01-30 16:28:32.12264678 +0000 UTC m=+366.760604126" lastFinishedPulling="2026-01-30 16:28:34.585894572 +0000 UTC m=+369.223851908" observedRunningTime="2026-01-30 16:28:35.193997171 +0000 UTC m=+369.831954547" watchObservedRunningTime="2026-01-30 16:28:35.197833858 +0000 UTC m=+369.835791204"
Jan 30 16:28:35 crc kubenswrapper[4766]: I0130 16:28:35.215957 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d8wb8" podStartSLOduration=2.7636568329999998 podStartE2EDuration="5.215936581s" podCreationTimestamp="2026-01-30 16:28:30 +0000 UTC" firstStartedPulling="2026-01-30 16:28:32.143651613 +0000 UTC m=+366.781608959" lastFinishedPulling="2026-01-30 16:28:34.595931361 +0000 UTC m=+369.233888707" observedRunningTime="2026-01-30 16:28:35.21050546 +0000 UTC m=+369.848462816" watchObservedRunningTime="2026-01-30 16:28:35.215936581 +0000 UTC m=+369.853893927"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.652691 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9s94z"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.653225 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9s94z"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.700687 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9s94z"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.784140 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqx4x"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.784228 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqx4x"
Jan 30 16:28:38 crc kubenswrapper[4766]: I0130 16:28:38.826119 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqx4x"
Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.045723 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.045785 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.245961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqx4x"
Jan 30 16:28:39 crc kubenswrapper[4766]: I0130 16:28:39.246020 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9s94z"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.009812 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.010247 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.054051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.196619 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.196685 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.235250 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.263964 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d8wb8"
Jan 30 16:28:41 crc kubenswrapper[4766]: I0130 16:28:41.281795 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.537561 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" containerID="cri-o://78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" gracePeriod=30
Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.589798 4766 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9nn5q container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body=
Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.589916 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused"
Jan 30 16:28:47 crc kubenswrapper[4766]: I0130 16:28:47.914695 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.058997 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059038 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059064 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059116 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059222 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.059267 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") pod \"97631abe-0d99-4f69-b208-4da9d19a8400\" (UID: \"97631abe-0d99-4f69-b208-4da9d19a8400\") "
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.060366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.061017 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.071755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252" (OuterVolumeSpecName: "kube-api-access-79252") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "kube-api-access-79252". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.072129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.072690 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.073897 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.074333 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.075488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "97631abe-0d99-4f69-b208-4da9d19a8400" (UID: "97631abe-0d99-4f69-b208-4da9d19a8400"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161317 4766 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161350 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161359 4766 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161368 4766 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97631abe-0d99-4f69-b208-4da9d19a8400-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161379 4766 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97631abe-0d99-4f69-b208-4da9d19a8400-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79252\" (UniqueName: \"kubernetes.io/projected/97631abe-0d99-4f69-b208-4da9d19a8400-kube-api-access-79252\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.161394 4766 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97631abe-0d99-4f69-b208-4da9d19a8400-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252169 4766 generic.go:334] "Generic (PLEG): container finished" podID="97631abe-0d99-4f69-b208-4da9d19a8400" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086" exitCode=0
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252236 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerDied","Data":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"}
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252263 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q" event={"ID":"97631abe-0d99-4f69-b208-4da9d19a8400","Type":"ContainerDied","Data":"8607ddfed85f0737d38a8c68a75c871fb7626f9536fec8516b4240081fc47421"}
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252270 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nn5q"
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.252280 4766 scope.go:117] "RemoveContainer" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.269058 4766 scope.go:117] "RemoveContainer" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"
Jan 30 16:28:48 crc kubenswrapper[4766]: E0130 16:28:48.269659 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": container with ID starting with 78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086 not found: ID does not exist" containerID="78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.269692 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086"} err="failed to get container status \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": rpc error: code = NotFound desc = could not find container \"78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086\": container with ID starting with 78ed4a5085e5d869bfcb80a811cd6fcf0f153ceead1030384c7d491fb2b98086 not found: ID does not exist"
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.283351 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"]
Jan 30 16:28:48 crc kubenswrapper[4766]: I0130 16:28:48.288093 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nn5q"]
Jan 30 16:28:50 crc kubenswrapper[4766]: I0130 16:28:50.048218 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" path="/var/lib/kubelet/pods/97631abe-0d99-4f69-b208-4da9d19a8400/volumes"
Jan 30 16:29:09 crc kubenswrapper[4766]: I0130 16:29:09.045846 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:29:09 crc kubenswrapper[4766]: I0130 16:29:09.046553 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.045778 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046422 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046466 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.046997 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.047051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f" gracePeriod=600
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530236 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f" exitCode=0
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"}
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.530994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"}
Jan 30 16:29:39 crc kubenswrapper[4766]: I0130 16:29:39.531037 4766 scope.go:117] "RemoveContainer" containerID="183f20bedf7df01c9272786fadedafb6b3e3e9111300658b263f00ec10891823"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.174797 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"]
Jan 30 16:30:00 crc kubenswrapper[4766]: E0130 16:30:00.175799 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.175819 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.175943 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="97631abe-0d99-4f69-b208-4da9d19a8400" containerName="registry"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.176535 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.178882 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"]
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.179840 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.183113 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.218921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.218996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.219046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.319958 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.321800 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.334222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.341652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"collect-profiles-29496510-glrms\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.506159 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:00 crc kubenswrapper[4766]: I0130 16:30:00.690452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"]
Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657582 4766 generic.go:334] "Generic (PLEG): container finished" podID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerID="add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e" exitCode=0
Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657683 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerDied","Data":"add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e"}
Jan 30 16:30:01 crc kubenswrapper[4766]: I0130 16:30:01.657969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerStarted","Data":"e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb"}
Jan 30 16:30:02 crc kubenswrapper[4766]: I0130 16:30:02.896258 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") "
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049602 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") "
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.049707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") pod \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\" (UID: \"aabaaf93-f51e-4847-b39a-8ecccc43f8d4\") "
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.050331 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.055631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw" (OuterVolumeSpecName: "kube-api-access-s4qsw") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "kube-api-access-s4qsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.055689 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aabaaf93-f51e-4847-b39a-8ecccc43f8d4" (UID: "aabaaf93-f51e-4847-b39a-8ecccc43f8d4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151583 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4qsw\" (UniqueName: \"kubernetes.io/projected/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-kube-api-access-s4qsw\") on node \"crc\" DevicePath \"\""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151918 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.151933 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aabaaf93-f51e-4847-b39a-8ecccc43f8d4-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms" event={"ID":"aabaaf93-f51e-4847-b39a-8ecccc43f8d4","Type":"ContainerDied","Data":"e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb"}
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670311 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e688a33fe70e771eac1b1a8dca3c2b0e939682e5b9a2a820bafb347a8c213deb"
Jan 30 16:30:03 crc kubenswrapper[4766]: I0130 16:30:03.670349 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"
Jan 30 16:31:39 crc kubenswrapper[4766]: I0130 16:31:39.045875 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:31:39 crc kubenswrapper[4766]: I0130 16:31:39.046409 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:32:09 crc kubenswrapper[4766]: I0130 16:32:09.046254 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:32:09 crc kubenswrapper[4766]: I0130 16:32:09.046967 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.045567 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.046234 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.046279 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.047583 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.047696 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" gracePeriod=600
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.707775 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" exitCode=0
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.707857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f"}
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.708219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"}
Jan 30 16:32:39 crc kubenswrapper[4766]: I0130 16:32:39.708264 4766 scope.go:117] "RemoveContainer" containerID="a61da9bc846bcef2fd5085fc646835d633689f5537ff5019224103cb78b8173f"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.774193 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mxw77"]
Jan 30 16:34:10 crc kubenswrapper[4766]: E0130 16:34:10.774964 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.774979 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.775073 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" containerName="collect-profiles"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.775446 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777750 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777860 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.777972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.780253 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.790492 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"]
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914438 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:10 crc kubenswrapper[4766]: I0130 16:34:10.914671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.016729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.017154 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.038469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"crc-storage-crc-mxw77\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") " pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.095581 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.279045 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"]
Jan 30 16:34:11 crc kubenswrapper[4766]: W0130 16:34:11.286581 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad5692e_34c5_4e32_ba96_cd5e6e617c62.slice/crio-41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c WatchSource:0}: Error finding container 41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c: Status 404 returned error can't find the container with id 41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c
Jan 30 16:34:11 crc kubenswrapper[4766]: I0130 16:34:11.288932 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:34:12 crc kubenswrapper[4766]: I0130 16:34:12.185200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerStarted","Data":"41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c"}
Jan 30 16:34:13 crc kubenswrapper[4766]: I0130 16:34:13.192398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerDied","Data":"403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304"}
Jan 30 16:34:13 crc kubenswrapper[4766]: I0130 16:34:13.193345 4766 generic.go:334] "Generic (PLEG): container finished" podID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerID="403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304" exitCode=0
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.398672 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558645 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") "
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") "
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.558822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") pod \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\" (UID: \"3ad5692e-34c5-4e32-ba96-cd5e6e617c62\") "
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.559073 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.565829 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9" (OuterVolumeSpecName: "kube-api-access-mtjq9") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "kube-api-access-mtjq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.574883 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3ad5692e-34c5-4e32-ba96-cd5e6e617c62" (UID: "3ad5692e-34c5-4e32-ba96-cd5e6e617c62"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659863 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtjq9\" (UniqueName: \"kubernetes.io/projected/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-kube-api-access-mtjq9\") on node \"crc\" DevicePath \"\""
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659907 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 30 16:34:14 crc kubenswrapper[4766]: I0130 16:34:14.659919 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ad5692e-34c5-4e32-ba96-cd5e6e617c62-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207240 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mxw77" event={"ID":"3ad5692e-34c5-4e32-ba96-cd5e6e617c62","Type":"ContainerDied","Data":"41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c"}
Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207291 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41aa3cc9e83c071b20feb20af7cd9beb2280e38cda41f53df29d32c582f72e3c"
Jan 30 16:34:15 crc kubenswrapper[4766]: I0130 16:34:15.207386 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mxw77"
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.021607 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"]
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023327 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" containerID="cri-o://eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" gracePeriod=30
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023372 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" containerID="cri-o://9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" gracePeriod=30
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023452 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" gracePeriod=30
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023560 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" containerID="cri-o://5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" gracePeriod=30
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023612 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" containerID="cri-o://03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" gracePeriod=30
Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023632 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" containerID="cri-o://fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.023651 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" containerID="cri-o://041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.098675 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" containerID="cri-o://647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" gracePeriod=30 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.223625 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.225384 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.225898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226332 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" exitCode=0 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226358 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" exitCode=0 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226367 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" exitCode=143 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226376 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" exitCode=143 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226419 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226454 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.226462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.231346 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232752 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/1.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232799 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a74bc5e-af98-4849-820c-7056caabc485" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" exitCode=2 Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerDied","Data":"166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c"} Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.232870 4766 scope.go:117] "RemoveContainer" containerID="5cca75d6ff61e1e073104559f6fcac5d76919a1e445da3f2d6fe271d1e0e4082" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.233444 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.233610 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.390064 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.393763 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.395643 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.396154 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413366 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413412 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413494 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413525 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413575 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: 
\"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413764 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.413899 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414039 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414069 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.414266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") pod \"d6a299e8-188d-4777-bb82-a0994feabcff\" (UID: \"d6a299e8-188d-4777-bb82-a0994feabcff\") " Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415505 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket" (OuterVolumeSpecName: "log-socket") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415526 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415558 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash" (OuterVolumeSpecName: "host-slash") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415647 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415682 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.415739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log" (OuterVolumeSpecName: "node-log") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.416411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.416504 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417245 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.417438 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.419921 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh" (OuterVolumeSpecName: "kube-api-access-4psqh") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "kube-api-access-4psqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.420106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.428472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d6a299e8-188d-4777-bb82-a0994feabcff" (UID: "d6a299e8-188d-4777-bb82-a0994feabcff"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.460986 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44h4c"] Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461223 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461237 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461245 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461251 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461264 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461270 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461277 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461284 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461293 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461299 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kubecfg-setup" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461315 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kubecfg-setup" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461323 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461330 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461339 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461345 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461355 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461361 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461374 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461381 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461387 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461395 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461400 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461480 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461488 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-acl-logging" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461497 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461504 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-node" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461512 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461523 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovn-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461530 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461536 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" containerName="storage" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461545 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="sbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461552 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="nbdb" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461559 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="northd" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461683 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: E0130 16:34:17.461696 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461702 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.461807 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" containerName="ovnkube-controller" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.463422 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516248 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516334 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc 
kubenswrapper[4766]: I0130 16:34:17.516392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516484 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516518 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516783 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod 
\"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.516977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517449 4766 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517494 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517512 4766 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517524 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517558 4766 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517665 4766 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517678 4766 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517694 4766 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517709 4766 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517722 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517736 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517785 4766 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517798 4766 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517809 4766 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517822 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d6a299e8-188d-4777-bb82-a0994feabcff-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517836 4766 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517848 4766 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517859 4766 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d6a299e8-188d-4777-bb82-a0994feabcff-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517870 4766 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d6a299e8-188d-4777-bb82-a0994feabcff-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.517881 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4psqh\" (UniqueName: \"kubernetes.io/projected/d6a299e8-188d-4777-bb82-a0994feabcff-kube-api-access-4psqh\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619501 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-slash\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619975 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.619910 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620075 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620113 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-netns\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620145 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620162 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-systemd-units\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620208 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-kubelet\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620232 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-var-lib-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620270 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-node-log\") pod \"ovnkube-node-44h4c\" (UID: 
\"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620291 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620326 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-ovn\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-bin\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-systemd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-run-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-log-socket\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-cni-netd\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.620991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-config\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621054 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-host-run-ovn-kubernetes\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c86b5492-8fad-4730-9587-79439536dfee-etc-openvswitch\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-env-overrides\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.621418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c86b5492-8fad-4730-9587-79439536dfee-ovnkube-script-lib\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.625958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c86b5492-8fad-4730-9587-79439536dfee-ovn-node-metrics-cert\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.639247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxqb\" (UniqueName: \"kubernetes.io/projected/c86b5492-8fad-4730-9587-79439536dfee-kube-api-access-nfxqb\") pod \"ovnkube-node-44h4c\" (UID: \"c86b5492-8fad-4730-9587-79439536dfee\") " pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:17 crc kubenswrapper[4766]: I0130 16:34:17.778658 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.242430 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovnkube-controller/3.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.244878 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-acl-logging/0.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245365 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-54ngm_d6a299e8-188d-4777-bb82-a0994feabcff/ovn-controller/0.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245699 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245726 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245736 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245770 4766 generic.go:334] "Generic (PLEG): container finished" podID="d6a299e8-188d-4777-bb82-a0994feabcff" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245941 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" event={"ID":"d6a299e8-188d-4777-bb82-a0994feabcff","Type":"ContainerDied","Data":"b7c7571b036dc1cbf0576f5638a00f9530f0e7ad9d69b4b12af59327bef5efe3"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.245969 4766 scope.go:117] "RemoveContainer" 
containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.246071 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-54ngm" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.248945 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250826 4766 generic.go:334] "Generic (PLEG): container finished" podID="c86b5492-8fad-4730-9587-79439536dfee" containerID="9f62ed3f25bc6771847095b8e8045bffd473ce8376e8f6e634c0ed562f4703cf" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerDied","Data":"9f62ed3f25bc6771847095b8e8045bffd473ce8376e8f6e634c0ed562f4703cf"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.250909 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"75a1bcc6b31f2cef694000185b132df1bc20b86ae4a75a382758838626d5d09d"} Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.264070 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.296388 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.311254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.319312 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-54ngm"] Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.337231 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.355291 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.368040 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.386402 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.405450 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.419447 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.434165 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.455686 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 
16:34:18.456327 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.456465 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.456505 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457164 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457202 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457219 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457571 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457609 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457640 4766 
scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.457926 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457954 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.457971 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458211 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458264 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458284 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458605 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458638 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 
3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.458662 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.458985 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459008 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459023 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459312 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459360 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459378 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459666 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459696 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} err="failed to get container status \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": rpc 
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459731 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"
Jan 30 16:34:18 crc kubenswrapper[4766]: E0130 16:34:18.459965 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.459995 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} err="failed to get container status \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460013 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460322 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460350 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460604 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.460632 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461004 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461044 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461382 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461428 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461716 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.461735 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462016 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462043 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462294 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462314 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462593 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462611 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462870 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} err="failed to get container status \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": rpc error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.462886 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463151 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} err="failed to get container status \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463167 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463443 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463461 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463794 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.463819 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.464458 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.464496 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.464800 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.464824 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465161 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465193 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465489 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465508 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"
Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465766 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.465783 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466006 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466024 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466234 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} err="failed to get container status \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": rpc error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466256 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466508 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} err="failed to get container status \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466530 4766 scope.go:117] "RemoveContainer" containerID="647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466720 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b"} err="failed to get container status \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": rpc error: code = NotFound desc = could not find container \"647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b\": container with ID starting with 647d98f47c2a42e161b8c7c39f2520f193644bfbe6a04439449b24171986cc9b not found: ID does not exist" Jan 
30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466736 4766 scope.go:117] "RemoveContainer" containerID="18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.466995 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75"} err="failed to get container status \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": rpc error: code = NotFound desc = could not find container \"18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75\": container with ID starting with 18d9b21675747867db57caacb92c4bf83579e8cabdd73ca5d6bd732d7f837c75 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467019 4766 scope.go:117] "RemoveContainer" containerID="03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467278 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd"} err="failed to get container status \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": rpc error: code = NotFound desc = could not find container \"03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd\": container with ID starting with 03e745a08485604cf4d83b144c0fb2b6073a05dd8ca4393382c04680cedee5bd not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467302 4766 scope.go:117] "RemoveContainer" containerID="9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467576 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5"} err="failed to get container status \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": rpc error: code = NotFound desc = could not find container \"9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5\": container with ID starting with 9f2162a9fcf0b25843a42811a2bd488eb3e120a5cc45298461eccc66e52548c5 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467602 4766 scope.go:117] "RemoveContainer" containerID="5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467835 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044"} err="failed to get container status \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": rpc error: code = NotFound desc = could not find container \"5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044\": container with ID starting with 5706950c765db498b7aba67853762653187eb1c09b7cd3d282a38f2acd4d6044 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.467861 4766 scope.go:117] "RemoveContainer" containerID="3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468098 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441"} err="failed to get container status 
\"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": rpc error: code = NotFound desc = could not find container \"3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441\": container with ID starting with 3d2dcaab48d56aa3be408c766104d11111240ef6f0f19c4c76b9b31bba81c441 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468147 4766 scope.go:117] "RemoveContainer" containerID="fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468476 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01"} err="failed to get container status \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": rpc error: code = NotFound desc = could not find container \"fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01\": container with ID starting with fd8cf873241e4dfa4e1d7d67497081e4da9347a262212ec71d2050f5af74bd01 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468522 4766 scope.go:117] "RemoveContainer" containerID="041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468851 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6"} err="failed to get container status \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": rpc error: code = NotFound desc = could not find container \"041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6\": container with ID starting with 041dfe599a793b1a9e23496ff653759df1cbe25f2718d9e9f9505b66e307d8d6 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.468874 4766 scope.go:117] "RemoveContainer" containerID="eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.469244 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78"} err="failed to get container status \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": rpc error: code = NotFound desc = could not find container \"eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78\": container with ID starting with eb4d89243b0d85bf4a8bc6ed3965395dec7aad1fb81a50cc279c87ba15363d78 not found: ID does not exist" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.469272 4766 scope.go:117] "RemoveContainer" containerID="458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1" Jan 30 16:34:18 crc kubenswrapper[4766]: I0130 16:34:18.469536 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1"} err="failed to get container status \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": rpc error: code = NotFound desc = could not find container \"458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1\": container with ID starting with 458891e9e54f0953079d9f8019491a85fc2c12c21a886990e1e7a9ab142d25f1 not found: ID does not exist" Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"22d6b7400859e2ed0cbf6a8a7f9fc829406089f0538e65bb7577f5c435edea46"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"506ad4c7a04f42ef0a5732b1f006296851de0cb2ce967eb0300b530c1b668103"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"82688917a0460a29cd019cc88f9714be1657f7ee18dbb117f81bcfecadb3f846"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260858 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"0cbc88b47c4e6d0abaef77ffb45c7a93fa376bd27e8926ba4ae530c6e74b7cc6"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"f721586498011f5a3be49997817d6abf23cf3be8d4c432796851c02d42295bb9"} Jan 30 16:34:19 crc kubenswrapper[4766]: I0130 16:34:19.260884 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"778d81e8098661d61bdb5e56b5d01eaa521ef7726e0fff5e12d45cdb1cded618"} Jan 30 16:34:20 crc kubenswrapper[4766]: I0130 16:34:20.047716 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a299e8-188d-4777-bb82-a0994feabcff" path="/var/lib/kubelet/pods/d6a299e8-188d-4777-bb82-a0994feabcff/volumes" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.270446 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.271750 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.273938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.275992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"79ae7bcae501bb7a73d5732ca84bffb5c97991491acceb963871684414c91b5d"} Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.367956 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.368022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.368043 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469534 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469617 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.469645 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.470072 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.470195 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.488827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: I0130 16:34:21.585021 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608415 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608535 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608563 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:21 crc kubenswrapper[4766]: E0130 16:34:21.608627 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(0f9cf759cf4fcbc4263b721c9fab0f6df77c599e3e2d76a9648ff5703e475541): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.124577 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.125572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.126049 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159736 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159859 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.159896 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:24 crc kubenswrapper[4766]: E0130 16:34:24.160458 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(993f8bef9586636e8b11ad0b8e6aabd003df1d5b1991bdd49436cd84887a9787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" event={"ID":"c86b5492-8fad-4730-9587-79439536dfee","Type":"ContainerStarted","Data":"cd14fc5594090fb492f38421457d8396d7f7543f41b2e6a77bd883e197144815"} Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309580 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.309590 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.360152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:24 crc kubenswrapper[4766]: I0130 16:34:24.388107 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" podStartSLOduration=7.388086347 podStartE2EDuration="7.388086347s" podCreationTimestamp="2026-01-30 16:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:34:24.383107149 +0000 UTC m=+719.021064505" watchObservedRunningTime="2026-01-30 16:34:24.388086347 +0000 UTC m=+719.026043693" Jan 30 16:34:25 crc kubenswrapper[4766]: I0130 16:34:25.315429 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:25 crc kubenswrapper[4766]: I0130 16:34:25.342277 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:29 crc kubenswrapper[4766]: I0130 16:34:29.039621 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:29 crc kubenswrapper[4766]: E0130 16:34:29.040069 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-l6xdr_openshift-multus(3a74bc5e-af98-4849-820c-7056caabc485)\"" pod="openshift-multus/multus-l6xdr" podUID="3a74bc5e-af98-4849-820c-7056caabc485" Jan 30 16:34:35 crc kubenswrapper[4766]: I0130 16:34:35.039242 4766 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: I0130 16:34:35.041218 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.073984 4766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074113 4766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074170 4766 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:35 crc kubenswrapper[4766]: E0130 16:34:35.074307 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace(7cde9372-207a-40f0-829b-1e0b5c662ec1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_openshift-marketplace_7cde9372-207a-40f0-829b-1e0b5c662ec1_0(9a280ebcb8971193df816f363393f0b269d17dea2fe3b1d90473d3d1f2177e39): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" Jan 30 16:34:39 crc kubenswrapper[4766]: I0130 16:34:39.045071 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:34:39 crc kubenswrapper[4766]: I0130 16:34:39.045511 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.039766 4766 scope.go:117] "RemoveContainer" containerID="166e9165ba520b270882953160a98d79d10fd4c5b0fa39f8bd2fe923a3be331c" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.398688 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-l6xdr_3a74bc5e-af98-4849-820c-7056caabc485/kube-multus/2.log" Jan 30 16:34:42 crc kubenswrapper[4766]: I0130 16:34:42.398995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-l6xdr" event={"ID":"3a74bc5e-af98-4849-820c-7056caabc485","Type":"ContainerStarted","Data":"bbdc1125b2a2d4ced39fc4271a41707288f580e00c51cdac577a979d9cbd3cb4"} Jan 30 16:34:47 crc kubenswrapper[4766]: I0130 16:34:47.807292 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44h4c" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.039061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.039503 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:49 crc kubenswrapper[4766]: I0130 16:34:49.429649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn"] Jan 30 16:34:50 crc kubenswrapper[4766]: I0130 16:34:50.443801 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerStarted","Data":"744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f"} Jan 30 16:34:51 crc kubenswrapper[4766]: I0130 16:34:51.450821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerStarted","Data":"3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b"} Jan 30 16:34:52 crc kubenswrapper[4766]: I0130 16:34:52.458750 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b" exitCode=0 Jan 30 16:34:52 crc kubenswrapper[4766]: I0130 16:34:52.458823 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"3a557415adc4c1a24b5e3dd8c04efa476959a3cc0dc056e6d0722bbe885f522b"} Jan 30 16:34:54 crc kubenswrapper[4766]: E0130 16:34:54.189302 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cde9372_207a_40f0_829b_1e0b5c662ec1.slice/crio-conmon-103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:34:54 crc kubenswrapper[4766]: I0130 16:34:54.471162 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7" exitCode=0 Jan 30 16:34:54 crc kubenswrapper[4766]: I0130 16:34:54.471256 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"103dbfee5d273e9ffedd8c92d2570e1876b974a9170372fda75c5aa51f6aabe7"} Jan 30 16:34:55 crc kubenswrapper[4766]: I0130 16:34:55.478795 4766 generic.go:334] "Generic (PLEG): container finished" podID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerID="5319ff026802c0c82e451a33c953ff8cf1736dd73a18f6abd307187dd5f7cbf4" exitCode=0 Jan 30 16:34:55 crc kubenswrapper[4766]: I0130 16:34:55.478864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"5319ff026802c0c82e451a33c953ff8cf1736dd73a18f6abd307187dd5f7cbf4"} Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.677419 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.853572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854021 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854067 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") pod \"7cde9372-207a-40f0-829b-1e0b5c662ec1\" (UID: \"7cde9372-207a-40f0-829b-1e0b5c662ec1\") " Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.854495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle" (OuterVolumeSpecName: "bundle") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.864142 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9" (OuterVolumeSpecName: "kube-api-access-jgjs9") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "kube-api-access-jgjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.877965 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util" (OuterVolumeSpecName: "util") pod "7cde9372-207a-40f0-829b-1e0b5c662ec1" (UID: "7cde9372-207a-40f0-829b-1e0b5c662ec1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954691 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgjs9\" (UniqueName: \"kubernetes.io/projected/7cde9372-207a-40f0-829b-1e0b5c662ec1-kube-api-access-jgjs9\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954755 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:56 crc kubenswrapper[4766]: I0130 16:34:56.954774 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cde9372-207a-40f0-829b-1e0b5c662ec1-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.492988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" event={"ID":"7cde9372-207a-40f0-829b-1e0b5c662ec1","Type":"ContainerDied","Data":"744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f"} Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.493033 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744530273ebd16fb16a3018ffe27a238f4d8162cb092bd23625842e70001915f" Jan 30 16:34:57 crc kubenswrapper[4766]: I0130 16:34:57.493171 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn" Jan 30 16:35:02 crc kubenswrapper[4766]: I0130 16:35:02.078221 4766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.000554 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001053 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001151 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001255 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="pull" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001318 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="pull" Jan 30 16:35:03 crc kubenswrapper[4766]: E0130 16:35:03.001386 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="util" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001446 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="util" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.001616 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cde9372-207a-40f0-829b-1e0b5c662ec1" containerName="extract" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.002134 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.004193 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bd9xs" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.005096 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.005564 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.015269 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.130928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.233103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.253008 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5jp\" (UniqueName: \"kubernetes.io/projected/463d1450-7318-4003-b30d-82dc9e1bec53-kube-api-access-wj5jp\") pod \"nmstate-operator-646758c888-v6mpm\" (UID: \"463d1450-7318-4003-b30d-82dc9e1bec53\") " pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.319363 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" Jan 30 16:35:03 crc kubenswrapper[4766]: I0130 16:35:03.526049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v6mpm"] Jan 30 16:35:04 crc kubenswrapper[4766]: I0130 16:35:04.537652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" event={"ID":"463d1450-7318-4003-b30d-82dc9e1bec53","Type":"ContainerStarted","Data":"a79d66a79a7f5b750db23b68abf2fb93538a4dc242f33d202d6e2b5ee160328d"} Jan 30 16:35:06 crc kubenswrapper[4766]: I0130 16:35:06.550784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" event={"ID":"463d1450-7318-4003-b30d-82dc9e1bec53","Type":"ContainerStarted","Data":"98c0729a0d2909f704b9e6fc150502d78682796d964f12c2fa3b9ce73ed9c47d"} Jan 30 16:35:06 crc kubenswrapper[4766]: I0130 16:35:06.569325 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-v6mpm" podStartSLOduration=2.530632072 podStartE2EDuration="4.569307557s" podCreationTimestamp="2026-01-30 16:35:02 +0000 UTC" firstStartedPulling="2026-01-30 16:35:03.54753517 +0000 UTC m=+758.185492516" lastFinishedPulling="2026-01-30 16:35:05.586210655 +0000 UTC m=+760.224168001" observedRunningTime="2026-01-30 16:35:06.565655406 +0000 UTC m=+761.203612742" watchObservedRunningTime="2026-01-30 16:35:06.569307557 +0000 UTC m=+761.207264903" Jan 30 16:35:09 crc kubenswrapper[4766]: I0130 16:35:09.045603 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:35:09 crc kubenswrapper[4766]: I0130 16:35:09.046221 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.620014 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.621281 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.628724 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pbrwf" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.629936 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.639091 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.640077 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.642399 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.648736 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-82wxr"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.649823 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.658752 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.748271 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.748985 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.749905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.749933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750084 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750286 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" 
(UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.750344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752518 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ftwwh" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752535 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.752559 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.758803 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851509 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851585 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851620 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851680 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: E0130 16:35:11.851824 4766 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.851979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: E0130 16:35:11.852051 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair podName:ed7e34e5-c04e-4852-b4a3-9e28fd5f960d nodeName:}" failed. No retries permitted until 2026-01-30 16:35:12.35202666 +0000 UTC m=+766.989984026 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-zj7fb" (UID: "ed7e34e5-c04e-4852-b4a3-9e28fd5f960d") : secret "openshift-nmstate-webhook" not found Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852100 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-nmstate-lock\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-ovs-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.852258 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/121c0166-75c7-4f39-a07b-c89cb81d2fd8-dbus-socket\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.875928 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lcf\" (UniqueName: \"kubernetes.io/projected/46ac0f62-2413-4258-a957-35039942d0f7-kube-api-access-94lcf\") pod \"nmstate-metrics-54757c584b-wv52c\" (UID: \"46ac0f62-2413-4258-a957-35039942d0f7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.875934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6qp\" (UniqueName: \"kubernetes.io/projected/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-kube-api-access-9d6qp\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.885979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6xp\" (UniqueName: \"kubernetes.io/projected/121c0166-75c7-4f39-a07b-c89cb81d2fd8-kube-api-access-8w6xp\") pod \"nmstate-handler-82wxr\" (UID: \"121c0166-75c7-4f39-a07b-c89cb81d2fd8\") " pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.936552 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.937482 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.938317 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.952030 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953742 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.953803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.954942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.967874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.977477 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:11 crc kubenswrapper[4766]: I0130 16:35:11.993043 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjd6k\" (UniqueName: \"kubernetes.io/projected/d30ca6b4-bd87-4d25-92dd-f3d94410f2a3-kube-api-access-kjd6k\") pod \"nmstate-console-plugin-7754f76f8b-d2p2z\" (UID: \"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055326 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055592 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.055870 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.065978 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157487 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157542 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.157562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.158468 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-console-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.158941 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-trusted-ca-bundle\") pod \"console-7977978877-p7rd4\" (UID: 
\"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.159153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-oauth-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.160305 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/381a1829-22f0-46b2-827d-92cc919105b8-service-ca\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.163314 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-serving-cert\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.163778 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/381a1829-22f0-46b2-827d-92cc919105b8-console-oauth-config\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.176402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxbg7\" (UniqueName: \"kubernetes.io/projected/381a1829-22f0-46b2-827d-92cc919105b8-kube-api-access-kxbg7\") pod \"console-7977978877-p7rd4\" (UID: \"381a1829-22f0-46b2-827d-92cc919105b8\") " pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.241651 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.247982 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30ca6b4_bd87_4d25_92dd_f3d94410f2a3.slice/crio-5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568 WatchSource:0}: Error finding container 5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568: Status 404 returned error can't find the container with id 5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568 Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.310159 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.360613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.362207 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wv52c"] Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.364639 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7e34e5-c04e-4852-b4a3-9e28fd5f960d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-zj7fb\" (UID: \"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.378876 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46ac0f62_2413_4258_a957_35039942d0f7.slice/crio-2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4 WatchSource:0}: Error finding container 2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4: Status 404 returned error can't find the container with id 2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4 Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.511829 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7977978877-p7rd4"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.523645 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod381a1829_22f0_46b2_827d_92cc919105b8.slice/crio-063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d WatchSource:0}: Error finding container 063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d: Status 404 returned error can't find the container with id 063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.557061 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.589287 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"2f4b2feb2242c54291b881f360feabde88ca321866c6f44fecd2fb3c670d86f4"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.590904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" event={"ID":"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3","Type":"ContainerStarted","Data":"5b340ded7fa8e74a0a0f6db174cd84c4bece45a343cfe6fcb0b58e8240cc0568"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.593346 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7977978877-p7rd4" event={"ID":"381a1829-22f0-46b2-827d-92cc919105b8","Type":"ContainerStarted","Data":"063ac212d3cbc101dc63688fcd2003f3b24bb0d80b4f9cf65a3f67233e4e585d"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.594405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-82wxr" event={"ID":"121c0166-75c7-4f39-a07b-c89cb81d2fd8","Type":"ContainerStarted","Data":"057fdc3d90e854ae0c9233ae76abfd21fc0773043e75ffb2ccb775261f7b0670"} Jan 30 16:35:12 crc kubenswrapper[4766]: I0130 16:35:12.734454 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb"] Jan 30 16:35:12 crc kubenswrapper[4766]: W0130 16:35:12.740404 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7e34e5_c04e_4852_b4a3_9e28fd5f960d.slice/crio-732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b WatchSource:0}: Error finding container 732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b: Status 404 returned error can't find the container with id 732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.601098 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" event={"ID":"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d","Type":"ContainerStarted","Data":"732160be18988effa9953eb767ba818557b75127bb413541c9bddcfc827cdc3b"} Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.603135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7977978877-p7rd4" event={"ID":"381a1829-22f0-46b2-827d-92cc919105b8","Type":"ContainerStarted","Data":"a8c5bd6d627f0391f72c6ecdfbe2e7043c67e77f3961b40d56f8cbc123288c9d"} Jan 30 16:35:13 crc kubenswrapper[4766]: I0130 16:35:13.633093 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7977978877-p7rd4" podStartSLOduration=2.633075714 podStartE2EDuration="2.633075714s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:35:13.630333268 +0000 UTC m=+768.268290634" watchObservedRunningTime="2026-01-30 16:35:13.633075714 +0000 UTC m=+768.271033060" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.628508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" 
event={"ID":"d30ca6b4-bd87-4d25-92dd-f3d94410f2a3","Type":"ContainerStarted","Data":"43663a3c3948da6f3bb9050df62ced2f22d35c35389a220a1c58f97b160b4d2f"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.630503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-82wxr" event={"ID":"121c0166-75c7-4f39-a07b-c89cb81d2fd8","Type":"ContainerStarted","Data":"5f955387c142d6152aa72c93e3a22cc5b6418dcf260225b86468a4e7471ae981"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.631009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.632077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"06c964640e084928c5191bc00c31fa05177e2f9e8b07b0248d9ac652202402a8"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.633881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" event={"ID":"ed7e34e5-c04e-4852-b4a3-9e28fd5f960d","Type":"ContainerStarted","Data":"80c46d4f11c6b9af97ee8ade02b26f5e0c516804ceeab295df15b00d598a3c25"} Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.634449 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.649616 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d2p2z" podStartSLOduration=2.547173594 podStartE2EDuration="5.649600865s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.250355963 +0000 UTC m=+766.888313309" lastFinishedPulling="2026-01-30 16:35:15.352783224 +0000 UTC m=+769.990740580" observedRunningTime="2026-01-30 16:35:16.646744346 +0000 UTC m=+771.284701692" watchObservedRunningTime="2026-01-30 16:35:16.649600865 +0000 UTC m=+771.287558201" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.674535 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-82wxr" podStartSLOduration=2.25684625 podStartE2EDuration="5.674503152s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.003463439 +0000 UTC m=+766.641420785" lastFinishedPulling="2026-01-30 16:35:15.421120341 +0000 UTC m=+770.059077687" observedRunningTime="2026-01-30 16:35:16.664324322 +0000 UTC m=+771.302281678" watchObservedRunningTime="2026-01-30 16:35:16.674503152 +0000 UTC m=+771.312460498" Jan 30 16:35:16 crc kubenswrapper[4766]: I0130 16:35:16.707558 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" podStartSLOduration=3.005994386 podStartE2EDuration="5.707285707s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.743255646 +0000 UTC m=+767.381212992" lastFinishedPulling="2026-01-30 16:35:15.444546967 +0000 UTC m=+770.082504313" observedRunningTime="2026-01-30 16:35:16.688058516 +0000 UTC m=+771.326015862" watchObservedRunningTime="2026-01-30 16:35:16.707285707 +0000 UTC m=+771.345243053" Jan 30 16:35:18 crc kubenswrapper[4766]: I0130 16:35:18.645397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" event={"ID":"46ac0f62-2413-4258-a957-35039942d0f7","Type":"ContainerStarted","Data":"2c5cc759ad98f952f1e523184193bf2408e6e57c04cc9a0dd4ca4f335a3f34cd"} Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.006300 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-82wxr" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.023069 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-wv52c" podStartSLOduration=5.543252648 podStartE2EDuration="11.023049722s" podCreationTimestamp="2026-01-30 16:35:11 +0000 UTC" firstStartedPulling="2026-01-30 16:35:12.383004283 +0000 UTC m=+767.020961629" lastFinishedPulling="2026-01-30 16:35:17.862801367 +0000 UTC m=+772.500758703" observedRunningTime="2026-01-30 16:35:18.670819138 +0000 UTC m=+773.308776494" watchObservedRunningTime="2026-01-30 16:35:22.023049722 +0000 UTC m=+776.661007068" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.311544 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.311889 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.324139 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.674552 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7977978877-p7rd4" Jan 30 16:35:22 crc kubenswrapper[4766]: I0130 16:35:22.734228 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:32 crc kubenswrapper[4766]: I0130 16:35:32.563873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-zj7fb" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.045929 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.046688 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.046758 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.047548 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:35:39 crc 
kubenswrapper[4766]: I0130 16:35:39.047627 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" gracePeriod=600 Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768482 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" exitCode=0 Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768526 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08"} Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf"} Jan 30 16:35:39 crc kubenswrapper[4766]: I0130 16:35:39.768886 4766 scope.go:117] "RemoveContainer" containerID="2b6328ad3aaf373dc4a6f6fbe7d49ef2029c9f80f2a9eb0657102d9506d1cc4f" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.450661 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.453046 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.460030 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.460291 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.589810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.590053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.590103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.691867 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.691984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692040 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.692813 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.718801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:46 crc kubenswrapper[4766]: I0130 16:35:46.813392 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.013895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz"] Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.783708 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8fgxh" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" containerID="cri-o://a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" gracePeriod=15 Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829467 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="4ed8858646645e29e0e1f5dc3c37cd6744bc9c6d25d0edc3cd0331bfbd7f56f0" exitCode=0 Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"4ed8858646645e29e0e1f5dc3c37cd6744bc9c6d25d0edc3cd0331bfbd7f56f0"} Jan 30 16:35:47 crc kubenswrapper[4766]: I0130 16:35:47.829569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerStarted","Data":"8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.152720 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8fgxh_695ff148-b91d-49a2-ad3b-9a240f11e454/console/0.log" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.153078 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.314695 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.314770 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315363 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") pod \"695ff148-b91d-49a2-ad3b-9a240f11e454\" (UID: \"695ff148-b91d-49a2-ad3b-9a240f11e454\") " Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.315849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config" (OuterVolumeSpecName: "console-config") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316289 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.316618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca" (OuterVolumeSpecName: "service-ca") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333163 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333313 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf" (OuterVolumeSpecName: "kube-api-access-cb5jf") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "kube-api-access-cb5jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.333706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "695ff148-b91d-49a2-ad3b-9a240f11e454" (UID: "695ff148-b91d-49a2-ad3b-9a240f11e454"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417083 4766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417155 4766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417196 4766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417212 4766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/695ff148-b91d-49a2-ad3b-9a240f11e454-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417226 4766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417238 4766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/695ff148-b91d-49a2-ad3b-9a240f11e454-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.417251 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5jf\" (UniqueName: \"kubernetes.io/projected/695ff148-b91d-49a2-ad3b-9a240f11e454-kube-api-access-cb5jf\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837058 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8fgxh_695ff148-b91d-49a2-ad3b-9a240f11e454/console/0.log" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837109 4766 generic.go:334] "Generic (PLEG): container finished" podID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" exitCode=2 Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837141 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerDied","Data":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837169 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8fgxh" event={"ID":"695ff148-b91d-49a2-ad3b-9a240f11e454","Type":"ContainerDied","Data":"49a469bfbf32d87fdc9772eb7cb8b7a2cfda12f2178ff6d5d4530255ca2db5f7"} Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837210 4766 scope.go:117] "RemoveContainer" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.837232 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8fgxh" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.852335 4766 scope.go:117] "RemoveContainer" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: E0130 16:35:48.852916 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": container with ID starting with a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532 not found: ID does not exist" containerID="a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.852966 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532"} err="failed to get container status \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": rpc error: code = NotFound desc = could not find container \"a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532\": container with ID starting with a29f3a6e898177f84fd4104c6f79254b235a6b9d4f40d3ce7f40e1993c6d8532 not found: ID does not exist" Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.869289 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:48 crc kubenswrapper[4766]: I0130 16:35:48.873696 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8fgxh"] Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.807714 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:49 crc kubenswrapper[4766]: E0130 16:35:49.808323 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.808339 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.808455 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" containerName="console" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.809380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.824018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.851655 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="a29420d4a2ee559fdc0731a79df5db056cec11144a44618263f8a7fe5f30a7d0" exitCode=0 Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.851766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"a29420d4a2ee559fdc0731a79df5db056cec11144a44618263f8a7fe5f30a7d0"} Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:49 crc kubenswrapper[4766]: I0130 16:35:49.936521 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038433 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.038962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"redhat-operators-j8lj5\" (UID: 
\"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.039222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.046815 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695ff148-b91d-49a2-ad3b-9a240f11e454" path="/var/lib/kubelet/pods/695ff148-b91d-49a2-ad3b-9a240f11e454/volumes" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.062296 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"redhat-operators-j8lj5\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.150685 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.608387 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:35:50 crc kubenswrapper[4766]: W0130 16:35:50.612674 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3289ef2d_c514_4e8a_91f9_200f8b7742dd.slice/crio-280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea WatchSource:0}: Error finding container 280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea: Status 404 returned error can't find the container with id 280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.860365 4766 generic.go:334] "Generic (PLEG): container finished" podID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerID="cf60c97e9515f8005b247d17957550bd7aa3b775f838d376d06bdc764bba4d06" exitCode=0 Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.860419 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"cf60c97e9515f8005b247d17957550bd7aa3b775f838d376d06bdc764bba4d06"} Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862277 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" exitCode=0 Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862298 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2"} Jan 30 16:35:50 crc kubenswrapper[4766]: I0130 16:35:50.862311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea"} Jan 30 16:35:52 crc 
kubenswrapper[4766]: I0130 16:35:52.150676 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.274910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") pod \"246ff80e-3711-4ffe-8fdb-0942844aef18\" (UID: \"246ff80e-3711-4ffe-8fdb-0942844aef18\") " Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.276461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle" (OuterVolumeSpecName: "bundle") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.283275 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g" (OuterVolumeSpecName: "kube-api-access-xfs8g") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "kube-api-access-xfs8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.291354 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util" (OuterVolumeSpecName: "util") pod "246ff80e-3711-4ffe-8fdb-0942844aef18" (UID: "246ff80e-3711-4ffe-8fdb-0942844aef18"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376924 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfs8g\" (UniqueName: \"kubernetes.io/projected/246ff80e-3711-4ffe-8fdb-0942844aef18-kube-api-access-xfs8g\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376978 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.376992 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/246ff80e-3711-4ffe-8fdb-0942844aef18-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" event={"ID":"246ff80e-3711-4ffe-8fdb-0942844aef18","Type":"ContainerDied","Data":"8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2"} Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882202 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b75abe8f00db0e3e85c4aed6e0f3389ef161eb2a1e7781b57fc6abf8d5a0ca2" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.882218 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz" Jan 30 16:35:52 crc kubenswrapper[4766]: I0130 16:35:52.883804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} Jan 30 16:35:54 crc kubenswrapper[4766]: I0130 16:35:54.897631 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" exitCode=0 Jan 30 16:35:54 crc kubenswrapper[4766]: I0130 16:35:54.897663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} Jan 30 16:35:55 crc kubenswrapper[4766]: I0130 16:35:55.906379 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerStarted","Data":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} Jan 30 16:35:55 crc kubenswrapper[4766]: I0130 16:35:55.932114 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8lj5" podStartSLOduration=2.488849472 podStartE2EDuration="6.932092669s" podCreationTimestamp="2026-01-30 16:35:49 +0000 UTC" firstStartedPulling="2026-01-30 16:35:50.863485623 +0000 UTC m=+805.501442969" lastFinishedPulling="2026-01-30 16:35:55.30672882 +0000 UTC m=+809.944686166" observedRunningTime="2026-01-30 16:35:55.927553023 +0000 UTC m=+810.565510369" watchObservedRunningTime="2026-01-30 16:35:55.932092669 +0000 UTC m=+810.570050015" Jan 30 16:36:00 crc kubenswrapper[4766]: I0130 
16:36:00.151570 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:00 crc kubenswrapper[4766]: I0130 16:36:00.153198 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.209921 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8lj5" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" probeResult="failure" output=< Jan 30 16:36:01 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:36:01 crc kubenswrapper[4766]: > Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762474 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762756 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="pull" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762771 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="pull" Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762790 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762797 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: E0130 16:36:01.762819 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="util" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762828 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="util" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.762947 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="246ff80e-3711-4ffe-8fdb-0942844aef18" containerName="extract" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.763447 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771716 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8rlg6" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771795 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.771947 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.772000 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.772999 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.796791 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.910778 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.910859 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:01 crc kubenswrapper[4766]: I0130 16:36:01.911012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012763 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.012858 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.020146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-webhook-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.020189 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f4ddea0-a380-401d-849f-6968d6d80e8b-apiservice-cert\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.034830 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbf2r\" (UniqueName: \"kubernetes.io/projected/8f4ddea0-a380-401d-849f-6968d6d80e8b-kube-api-access-pbf2r\") pod \"metallb-operator-controller-manager-5d87dd9885-cpjtx\" (UID: \"8f4ddea0-a380-401d-849f-6968d6d80e8b\") " pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.083780 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.113838 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.115656 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.120280 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.122580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xm4mf" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.122793 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.131647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.217633 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319262 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.319334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 
16:36:02.325005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-apiservice-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.343591 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2xb\" (UniqueName: \"kubernetes.io/projected/5aa43b8e-3f06-441e-ade0-264da132ec73-kube-api-access-sw2xb\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.350891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5aa43b8e-3f06-441e-ade0-264da132ec73-webhook-cert\") pod \"metallb-operator-webhook-server-698996dc4d-5ps7v\" (UID: \"5aa43b8e-3f06-441e-ade0-264da132ec73\") " pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.468606 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.513700 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.710825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v"] Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.943081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" event={"ID":"5aa43b8e-3f06-441e-ade0-264da132ec73","Type":"ContainerStarted","Data":"9e619a9ff7144ad61b032bce7c1d57fa12b75d8a4555f752c788c8adf52acd7d"} Jan 30 16:36:02 crc kubenswrapper[4766]: I0130 16:36:02.944305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" event={"ID":"8f4ddea0-a380-401d-849f-6968d6d80e8b","Type":"ContainerStarted","Data":"8093a03634dc6f265c75fa34bf526f27f52e20cd6faf07d52462ad22f30e983d"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.988191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" event={"ID":"8f4ddea0-a380-401d-849f-6968d6d80e8b","Type":"ContainerStarted","Data":"6cb06be018cb1dc73deb3e06fa95c9c10d71b75c766628df10648d8b73b3dfdd"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.988794 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.990007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" event={"ID":"5aa43b8e-3f06-441e-ade0-264da132ec73","Type":"ContainerStarted","Data":"4b4363d975b03f0dd583639c564b496fec4e643ae2789f7f3bc429df5e7f9290"} Jan 30 16:36:08 crc kubenswrapper[4766]: I0130 16:36:08.990363 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:09 crc kubenswrapper[4766]: I0130 16:36:09.008699 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" podStartSLOduration=1.910686766 podStartE2EDuration="8.008682512s" podCreationTimestamp="2026-01-30 16:36:01 +0000 UTC" firstStartedPulling="2026-01-30 16:36:02.531942984 +0000 UTC m=+817.169900340" lastFinishedPulling="2026-01-30 16:36:08.62993874 +0000 UTC m=+823.267896086" observedRunningTime="2026-01-30 16:36:09.005728931 +0000 UTC m=+823.643686297" watchObservedRunningTime="2026-01-30 16:36:09.008682512 +0000 UTC m=+823.646639858" Jan 30 16:36:09 crc kubenswrapper[4766]: I0130 16:36:09.031425 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" podStartSLOduration=1.111027266 podStartE2EDuration="7.031404329s" podCreationTimestamp="2026-01-30 16:36:02 +0000 UTC" firstStartedPulling="2026-01-30 16:36:02.724719315 +0000 UTC m=+817.362676661" lastFinishedPulling="2026-01-30 16:36:08.645096388 +0000 UTC m=+823.283053724" observedRunningTime="2026-01-30 16:36:09.030270397 +0000 UTC m=+823.668227743" watchObservedRunningTime="2026-01-30 16:36:09.031404329 +0000 UTC m=+823.669361675" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.200890 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.245537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:10 crc kubenswrapper[4766]: I0130 16:36:10.436097 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:12 crc kubenswrapper[4766]: I0130 16:36:12.015233 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j8lj5" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" containerID="cri-o://279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" gracePeriod=2 Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.026105 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029669 4766 generic.go:334] "Generic (PLEG): container finished" podID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" exitCode=0 Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8lj5" event={"ID":"3289ef2d-c514-4e8a-91f9-200f8b7742dd","Type":"ContainerDied","Data":"280644e1d18a0d1e8d1142b1055140312017ca431290f10dd3e831116e441aea"} Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.029769 4766 scope.go:117] "RemoveContainer" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.056793 4766 scope.go:117] "RemoveContainer" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.079232 4766 scope.go:117] "RemoveContainer" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.097384 4766 scope.go:117] "RemoveContainer" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.098047 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": container with ID starting with 279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f not found: ID does not exist" containerID="279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.098109 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f"} err="failed to get container status \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": rpc error: code = NotFound desc = could not find container \"279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f\": container with ID starting with 279ab1c91a57f88b554459f91b5f022cd47b53bbb2b07d6dc8bbf2b21565bd4f not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.098143 4766 scope.go:117] "RemoveContainer" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.099800 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": container with ID starting with 6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5 not found: ID does not exist" containerID="6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100243 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5"} err="failed to get container status \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": rpc error: code = NotFound desc = could not find container \"6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5\": container with ID starting with 6335444566f76747a1bbabdfff09d019511a163d12f77c386abcd02273d0ace5 not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100277 4766 scope.go:117] "RemoveContainer" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: E0130 16:36:13.100712 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": container with ID starting with f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2 not found: ID does not exist" containerID="f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.100743 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2"} err="failed to get container status \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": rpc error: code = NotFound desc = could not find container \"f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2\": container with ID starting with f7f92f8300c8713d910aa6a0bca5661d6dd6450946457b864fc079e600e849f2 not found: ID does not exist" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184144 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.184239 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") pod \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\" (UID: \"3289ef2d-c514-4e8a-91f9-200f8b7742dd\") " Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.185132 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities" (OuterVolumeSpecName: "utilities") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.190241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf" (OuterVolumeSpecName: "kube-api-access-dvzmf") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "kube-api-access-dvzmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.286875 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvzmf\" (UniqueName: \"kubernetes.io/projected/3289ef2d-c514-4e8a-91f9-200f8b7742dd-kube-api-access-dvzmf\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.286923 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.292771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3289ef2d-c514-4e8a-91f9-200f8b7742dd" (UID: "3289ef2d-c514-4e8a-91f9-200f8b7742dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:36:13 crc kubenswrapper[4766]: I0130 16:36:13.388282 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3289ef2d-c514-4e8a-91f9-200f8b7742dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.035435 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j8lj5" Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.074594 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:14 crc kubenswrapper[4766]: I0130 16:36:14.081256 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j8lj5"] Jan 30 16:36:16 crc kubenswrapper[4766]: I0130 16:36:16.049317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" path="/var/lib/kubelet/pods/3289ef2d-c514-4e8a-91f9-200f8b7742dd/volumes" Jan 30 16:36:22 crc kubenswrapper[4766]: I0130 16:36:22.479031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-698996dc4d-5ps7v" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.086729 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d87dd9885-cpjtx" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790526 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fr242"] Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790774 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-content" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790787 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-content" Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790798 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790803 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: E0130 16:36:42.790817 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-utilities" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790825 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="extract-utilities" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.790922 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3289ef2d-c514-4e8a-91f9-200f8b7742dd" containerName="registry-server" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.793348 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.796593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.796861 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.797482 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lr98l" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.812139 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.813104 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.814460 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.829889 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.894857 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pfspk"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.896090 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pfspk" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901228 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901273 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901312 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.901347 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-l98vt" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902008 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902096 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc 
kubenswrapper[4766]: I0130 16:36:42.902127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.902257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.935365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.938744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.940877 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 16:36:42 crc kubenswrapper[4766]: I0130 16:36:42.967854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003563 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003606 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003650 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003861 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.003942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.004015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.005383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-conf\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.005455 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4a563046-adc2-4e82-9b89-a549d3f06250-frr-startup\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.022459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-metrics\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.022727 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-frr-sockets\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.022831 4766 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.022912 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert podName:85bd5ff3-9577-4598-92a9-f24f00c56187 
nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.522889211 +0000 UTC m=+858.160846557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert") pod "frr-k8s-webhook-server-7df86c4f6c-z9cbg" (UID: "85bd5ff3-9577-4598-92a9-f24f00c56187") : secret "frr-k8s-webhook-server-cert" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.023582 4766 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.023641 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs podName:4a563046-adc2-4e82-9b89-a549d3f06250 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.523625291 +0000 UTC m=+858.161582637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs") pod "frr-k8s-fr242" (UID: "4a563046-adc2-4e82-9b89-a549d3f06250") : secret "frr-k8s-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.034149 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4a563046-adc2-4e82-9b89-a549d3f06250-reloader\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.054482 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft95b\" (UniqueName: \"kubernetes.io/projected/4a563046-adc2-4e82-9b89-a549d3f06250-kube-api-access-ft95b\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.071250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllx9\" (UniqueName: \"kubernetes.io/projected/85bd5ff3-9577-4598-92a9-f24f00c56187-kube-api-access-kllx9\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105438 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105571 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105623 4766 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105660 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105700 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.605682876 +0000 UTC m=+858.243640272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "speaker-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.105755 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.605735757 +0000 UTC m=+858.243693103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105827 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.105867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.106477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metallb-excludel2\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.124486 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhsvl\" (UniqueName: \"kubernetes.io/projected/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-kube-api-access-nhsvl\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207625 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.207824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.207977 4766 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.208047 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs podName:f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:43.708029281 +0000 UTC m=+858.345986627 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs") pod "controller-6968d8fdc4-7v5hl" (UID: "f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873") : secret "controller-certs-secret" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.209354 4766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.223821 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-cert\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.225616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqxr\" (UniqueName: \"kubernetes.io/projected/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-kube-api-access-pvqxr\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.613681 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.613763 4766 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: E0130 16:36:43.613852 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist podName:4ad0227f-0410-4f5e-bfc5-7dd96164c9b5 nodeName:}" failed. No retries permitted until 2026-01-30 16:36:44.61382924 +0000 UTC m=+859.251786596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist") pod "speaker-pfspk" (UID: "4ad0227f-0410-4f5e-bfc5-7dd96164c9b5") : secret "metallb-memberlist" not found Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.617024 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a563046-adc2-4e82-9b89-a549d3f06250-metrics-certs\") pod \"frr-k8s-fr242\" (UID: \"4a563046-adc2-4e82-9b89-a549d3f06250\") " pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.617222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-metrics-certs\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.618383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85bd5ff3-9577-4598-92a9-f24f00c56187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-z9cbg\" (UID: \"85bd5ff3-9577-4598-92a9-f24f00c56187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.713724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.714552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.718993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873-metrics-certs\") pod \"controller-6968d8fdc4-7v5hl\" (UID: \"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873\") " pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.731806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:43 crc kubenswrapper[4766]: I0130 16:36:43.855208 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.096539 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7v5hl"] Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.103950 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f6fbd7_b3c4_4f9f_8689_6ef8bfffc873.slice/crio-88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259 WatchSource:0}: Error finding container 88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259: Status 404 returned error can't find the container with id 88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259 Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.173928 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg"] Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.184476 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bd5ff3_9577_4598_92a9_f24f00c56187.slice/crio-afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74 WatchSource:0}: Error finding container afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74: Status 404 returned error can't find the container with id afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74 Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.203490 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"7a57d12be9db33f7944867a5d5772a42224afe93156b5996ce7704ebaafb810b"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.206741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"88565e9d56893170f4bbeb1152c6f58f59120f50c3ea3e3d36aca1530b34e259"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.208208 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" event={"ID":"85bd5ff3-9577-4598-92a9-f24f00c56187","Type":"ContainerStarted","Data":"afc38c4549098e7769f5c2e30eeef2c49915e66311608c060eacc89327369a74"} Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.625526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.632990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4ad0227f-0410-4f5e-bfc5-7dd96164c9b5-memberlist\") pod \"speaker-pfspk\" (UID: \"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5\") " pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: I0130 16:36:44.712744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pfspk" Jan 30 16:36:44 crc kubenswrapper[4766]: W0130 16:36:44.734088 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad0227f_0410_4f5e_bfc5_7dd96164c9b5.slice/crio-8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df WatchSource:0}: Error finding container 8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df: Status 404 returned error can't find the container with id 8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.216862 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"382ee6499400dc94efa59f0668fcda135b7569b8752a0f523567aecfc009ebde"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.217345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.217365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7v5hl" event={"ID":"f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873","Type":"ContainerStarted","Data":"b62e4e90d51a1b2ce278ac45697a19f01a3546f6bd182006d30a7104b5d374f1"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.218469 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"c8717d2d48641eff4fc5b1b9212396898a8a851794941527db40589ddbad6bea"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.218513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"8e386a4b97d396ffdbfac0da34197f2fbdfb2b1d8b4da282b30916f5c44ca6df"} Jan 30 16:36:45 crc kubenswrapper[4766]: I0130 16:36:45.235328 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7v5hl" podStartSLOduration=3.23530828 podStartE2EDuration="3.23530828s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:36:45.232095971 +0000 UTC m=+859.870053327" watchObservedRunningTime="2026-01-30 16:36:45.23530828 +0000 UTC m=+859.873265626" Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.238367 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfspk" event={"ID":"4ad0227f-0410-4f5e-bfc5-7dd96164c9b5","Type":"ContainerStarted","Data":"fae748f14ee98118df529619f1e5571f008377e981a821124004c21af1051271"} Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.238538 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pfspk" Jan 30 16:36:46 crc kubenswrapper[4766]: I0130 16:36:46.265139 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pfspk" podStartSLOduration=4.265121922 podStartE2EDuration="4.265121922s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:36:46.263291531 +0000 UTC m=+860.901248907" 
watchObservedRunningTime="2026-01-30 16:36:46.265121922 +0000 UTC m=+860.903079268" Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.280830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" event={"ID":"85bd5ff3-9577-4598-92a9-f24f00c56187","Type":"ContainerStarted","Data":"44d13023bd8846f5d03e9ed900ff2395ae7f6c094a213ac4119198efb563e41e"} Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.281431 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.282777 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="c8430d33a25d7dfebc61cdfe3fa72c14282cac69cf25a679cf1b274982e79c2c" exitCode=0 Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.282814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"c8430d33a25d7dfebc61cdfe3fa72c14282cac69cf25a679cf1b274982e79c2c"} Jan 30 16:36:52 crc kubenswrapper[4766]: I0130 16:36:52.297909 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" podStartSLOduration=2.719123585 podStartE2EDuration="10.297893317s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="2026-01-30 16:36:44.187975856 +0000 UTC m=+858.825933202" lastFinishedPulling="2026-01-30 16:36:51.766745588 +0000 UTC m=+866.404702934" observedRunningTime="2026-01-30 16:36:52.297730312 +0000 UTC m=+866.935687678" watchObservedRunningTime="2026-01-30 16:36:52.297893317 +0000 UTC m=+866.935850663" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.158477 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.159778 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.170153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257730 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.257774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.292076 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="3d9a5e34b7fc44db8f475d327060cd21e9f5b5ba7f5587d9fc0f1eea1c0dafc5" exitCode=0 Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.292447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"3d9a5e34b7fc44db8f475d327060cd21e9f5b5ba7f5587d9fc0f1eea1c0dafc5"} Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.358850 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359329 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.359997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") 
" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.360218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.384730 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"redhat-marketplace-vxf97\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.478454 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:36:53 crc kubenswrapper[4766]: I0130 16:36:53.702911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.300869 4766 generic.go:334] "Generic (PLEG): container finished" podID="4a563046-adc2-4e82-9b89-a549d3f06250" containerID="abf4bfd5c5ae534c7f2f2661737b4d1b3a1f987982416c44e5f867efc92dc5df" exitCode=0 Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.300958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerDied","Data":"abf4bfd5c5ae534c7f2f2661737b4d1b3a1f987982416c44e5f867efc92dc5df"} Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.303485 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2" exitCode=0 Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.304485 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"} Jan 30 16:36:54 crc kubenswrapper[4766]: I0130 16:36:54.304575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"61610dab44c5f75f053174ef3d6dd6d46a8f7dfdffe1f5a823849014fc14712e"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"a7a4297585f60a5fb97fb762e65def52b2100d59397b348ca9bd938d92d2e9da"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325720 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"94604a8c7c937828a757accec4fc2325738b11249749b12f406bd41a73640f81"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325729 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" 
event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"2f6caad5ff6f24679e25993fab9707766d6967277ef04bebefd1400d0e1f6f62"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"83ee4acf4c0431e29bbe4cffe5fe7ac7994acc1097aa42ae5f809f0fed43ff25"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.325747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"484c406d7229ad94d5a4d0d213d9c2163e2ef33675b290bddc271c8e30414915"} Jan 30 16:36:55 crc kubenswrapper[4766]: I0130 16:36:55.327067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.334359 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733" exitCode=0 Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.334488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.338555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fr242" event={"ID":"4a563046-adc2-4e82-9b89-a549d3f06250","Type":"ContainerStarted","Data":"e5d6973d5b8e0393c5c52c923814601658e1e4030da75738e9288444c5a5cb12"} Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.339326 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:56 crc kubenswrapper[4766]: I0130 16:36:56.379792 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fr242" podStartSLOduration=6.504529336 podStartE2EDuration="14.37977469s" podCreationTimestamp="2026-01-30 16:36:42 +0000 UTC" firstStartedPulling="2026-01-30 16:36:43.878669139 +0000 UTC m=+858.516626485" lastFinishedPulling="2026-01-30 16:36:51.753914493 +0000 UTC m=+866.391871839" observedRunningTime="2026-01-30 16:36:56.373759983 +0000 UTC m=+871.011717339" watchObservedRunningTime="2026-01-30 16:36:56.37977469 +0000 UTC m=+871.017732036" Jan 30 16:36:57 crc kubenswrapper[4766]: I0130 16:36:57.346946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerStarted","Data":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"} Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.714640 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.754519 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fr242" Jan 30 16:36:58 crc kubenswrapper[4766]: I0130 16:36:58.777627 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-vxf97" podStartSLOduration=3.341668958 podStartE2EDuration="5.777609136s" podCreationTimestamp="2026-01-30 16:36:53 +0000 UTC" firstStartedPulling="2026-01-30 16:36:54.306016717 +0000 UTC m=+868.943974063" lastFinishedPulling="2026-01-30 16:36:56.741956895 +0000 UTC m=+871.379914241" observedRunningTime="2026-01-30 16:36:57.380161539 +0000 UTC m=+872.018118895" watchObservedRunningTime="2026-01-30 16:36:58.777609136 +0000 UTC m=+873.415566482" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.479735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.480123 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.519971 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.776229 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-z9cbg" Jan 30 16:37:03 crc kubenswrapper[4766]: I0130 16:37:03.859075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7v5hl" Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.446671 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.496988 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:37:04 crc kubenswrapper[4766]: I0130 16:37:04.716261 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pfspk" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.219748 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.223386 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.225606 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.233007 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.250818 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.250929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.251032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.352907 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.353244 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.353387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.354467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.354543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.374669 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.398267 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxf97" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" containerID="cri-o://4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" gracePeriod=2 Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.550444 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.802076 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.862087 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.863382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.863526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") pod \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\" (UID: \"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93\") " Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.864260 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities" (OuterVolumeSpecName: "utilities") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.870315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95" (OuterVolumeSpecName: "kube-api-access-w8b95") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "kube-api-access-w8b95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.885064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" (UID: "ce9dc1b9-f415-411b-a16d-88cd7b5a7f93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965491 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965544 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:06 crc kubenswrapper[4766]: I0130 16:37:06.965557 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8b95\" (UniqueName: \"kubernetes.io/projected/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93-kube-api-access-w8b95\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.041452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs"] Jan 30 16:37:07 crc kubenswrapper[4766]: W0130 16:37:07.047282 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2619907_b01e_44ad_99e7_a1ae313da017.slice/crio-4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b WatchSource:0}: Error finding container 4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b: Status 404 returned error can't find the container with id 4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408036 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" containerID="1959e6dd1b2ba4a3477f420a2bbea12940cdba112a8ba32bc20c6d9dfec9ca9b" exitCode=0 Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"1959e6dd1b2ba4a3477f420a2bbea12940cdba112a8ba32bc20c6d9dfec9ca9b"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.408206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerStarted","Data":"4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 
16:37:07.413237 4766 generic.go:334] "Generic (PLEG): container finished" podID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" exitCode=0 Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413303 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxf97" event={"ID":"ce9dc1b9-f415-411b-a16d-88cd7b5a7f93","Type":"ContainerDied","Data":"61610dab44c5f75f053174ef3d6dd6d46a8f7dfdffe1f5a823849014fc14712e"} Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413322 4766 scope.go:117] "RemoveContainer" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.413438 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxf97" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.439664 4766 scope.go:117] "RemoveContainer" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.458963 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.464511 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxf97"] Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.482055 4766 scope.go:117] "RemoveContainer" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.497531 4766 scope.go:117] "RemoveContainer" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.498079 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": container with ID starting with 4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc not found: ID does not exist" containerID="4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.498199 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc"} err="failed to get container status \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": rpc error: code = NotFound desc = could not find container \"4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc\": container with ID starting with 4802e8e3e99295152b650c423951e2a5ce30714756f491f37fa685c64257e9cc not found: ID does not exist" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.498288 4766 scope.go:117] "RemoveContainer" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733" Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.498951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": container with ID starting with 47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733 not found: ID does not exist" containerID="47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.499002 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733"} err="failed to get container status \"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": rpc error: code = NotFound desc = could not find container \"47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733\": container with ID starting with 47fa4c9d02f627c53c6850d21c1b33cf039d947ef751f3cb370819aa9f64c733 not found: ID does not exist" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.499036 4766 scope.go:117] "RemoveContainer" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2" Jan 30 16:37:07 crc kubenswrapper[4766]: E0130 16:37:07.500013 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": container with ID starting with 4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2 not found: ID does not exist" containerID="4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2" Jan 30 16:37:07 crc kubenswrapper[4766]: I0130 16:37:07.500411 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2"} err="failed to get container status \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": rpc error: code = NotFound desc = could not find container \"4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2\": container with ID starting with 4a7ad1bd633774d0622fbb6f585fc2c5583634d9cc452aacda6d68763bee66d2 not found: ID does not exist" Jan 30 16:37:08 crc kubenswrapper[4766]: I0130 16:37:08.047050 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" path="/var/lib/kubelet/pods/ce9dc1b9-f415-411b-a16d-88cd7b5a7f93/volumes" Jan 30 16:37:11 crc kubenswrapper[4766]: I0130 16:37:11.444794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerStarted","Data":"94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225"} Jan 30 16:37:12 crc kubenswrapper[4766]: I0130 16:37:12.451857 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" containerID="94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225" exitCode=0 Jan 30 16:37:12 crc kubenswrapper[4766]: I0130 16:37:12.451910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"94babe5601723e188e65da3e57660ba884fd9bbdb91ef8019028b7dcb8285225"} Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.461450 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2619907-b01e-44ad-99e7-a1ae313da017" 
containerID="5ae0733e68fbb3ac77bf630446515e496329b8e9a3abab1728364758f402ac1e" exitCode=0 Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.461555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"5ae0733e68fbb3ac77bf630446515e496329b8e9a3abab1728364758f402ac1e"} Jan 30 16:37:13 crc kubenswrapper[4766]: I0130 16:37:13.718069 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fr242" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.755652 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896201 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.896457 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") pod \"a2619907-b01e-44ad-99e7-a1ae313da017\" (UID: \"a2619907-b01e-44ad-99e7-a1ae313da017\") " Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.897817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle" (OuterVolumeSpecName: "bundle") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.898230 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.904359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c" (OuterVolumeSpecName: "kube-api-access-hgm9c") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "kube-api-access-hgm9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.908467 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util" (OuterVolumeSpecName: "util") pod "a2619907-b01e-44ad-99e7-a1ae313da017" (UID: "a2619907-b01e-44ad-99e7-a1ae313da017"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.999649 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2619907-b01e-44ad-99e7-a1ae313da017-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:14 crc kubenswrapper[4766]: I0130 16:37:14.999702 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgm9c\" (UniqueName: \"kubernetes.io/projected/a2619907-b01e-44ad-99e7-a1ae313da017-kube-api-access-hgm9c\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" event={"ID":"a2619907-b01e-44ad-99e7-a1ae313da017","Type":"ContainerDied","Data":"4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b"} Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476070 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4790a06ad10e693c077ef5fea99a5eddb9f1b9aed3163e69b2cf11273475458b" Jan 30 16:37:15 crc kubenswrapper[4766]: I0130 16:37:15.476075 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.703431 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704134 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704146 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704156 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="util" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704162 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="util" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704220 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704227 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704235 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-content" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704241 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-content" Jan 30 16:37:19 crc kubenswrapper[4766]: E0130 16:37:19.704252 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="pull" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="pull" Jan 30 16:37:19 crc 
kubenswrapper[4766]: E0130 16:37:19.704274 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-utilities" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704280 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="extract-utilities" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704384 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2619907-b01e-44ad-99e7-a1ae313da017" containerName="extract" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704394 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9dc1b9-f415-411b-a16d-88cd7b5a7f93" containerName="registry-server" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.704794 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.708306 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8cqrb" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.708936 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.709005 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.719228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.776220 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.776381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.877380 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.877449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.878080 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e8d87956-3550-49b7-957e-56d39f9b81bf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:19 crc kubenswrapper[4766]: I0130 16:37:19.901113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxqm\" (UniqueName: \"kubernetes.io/projected/e8d87956-3550-49b7-957e-56d39f9b81bf-kube-api-access-nnxqm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-xhxlw\" (UID: \"e8d87956-3550-49b7-957e-56d39f9b81bf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.024396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.281591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw"] Jan 30 16:37:20 crc kubenswrapper[4766]: I0130 16:37:20.507598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" event={"ID":"e8d87956-3550-49b7-957e-56d39f9b81bf","Type":"ContainerStarted","Data":"c08571034286cbcc6601ec1daa16af854ca6fdd1c46435726ba7a2914558aadb"} Jan 30 16:37:24 crc kubenswrapper[4766]: I0130 16:37:24.535974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" event={"ID":"e8d87956-3550-49b7-957e-56d39f9b81bf","Type":"ContainerStarted","Data":"50a6768314658c0aef5a8eaa9d961cf800a9675d42cdb608e4907e5c06746de3"} Jan 30 16:37:24 crc kubenswrapper[4766]: I0130 16:37:24.559342 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-xhxlw" podStartSLOduration=2.093519601 podStartE2EDuration="5.559321131s" podCreationTimestamp="2026-01-30 16:37:19 +0000 UTC" firstStartedPulling="2026-01-30 16:37:20.289277915 +0000 UTC m=+894.927235261" lastFinishedPulling="2026-01-30 16:37:23.755079445 +0000 UTC m=+898.393036791" observedRunningTime="2026-01-30 16:37:24.558015884 +0000 UTC m=+899.195973260" watchObservedRunningTime="2026-01-30 16:37:24.559321131 +0000 UTC m=+899.197278477" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.964509 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.965992 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.969749 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:25 crc kubenswrapper[4766]: I0130 16:37:25.997261 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.071830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.072003 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.072106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.074120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.074244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.106390 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"community-operators-25857\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.289432 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:26 crc kubenswrapper[4766]: I0130 16:37:26.969893 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574358 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" exitCode=0 Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574688 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"} Jan 30 16:37:27 crc kubenswrapper[4766]: I0130 16:37:27.574716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"abb8fe20ff8febe1d8453814192f1c606f41fd3fcad611e77b0dc1734c540c56"} Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.598002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.948965 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.950265 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.958506 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.958694 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.959209 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vjnc8" Jan 30 16:37:28 crc kubenswrapper[4766]: I0130 16:37:28.964506 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.026752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.027041 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.128051 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.128223 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.156544 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn85k\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-kube-api-access-pn85k\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.167286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1682925-c14f-425a-b072-535a37cdca48-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qr6lx\" (UID: \"b1682925-c14f-425a-b072-535a37cdca48\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.275618 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.608716 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81" exitCode=0 Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.608914 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} Jan 30 16:37:29 crc kubenswrapper[4766]: I0130 16:37:29.729992 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qr6lx"] Jan 30 16:37:29 crc kubenswrapper[4766]: W0130 16:37:29.732398 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1682925_c14f_425a_b072_535a37cdca48.slice/crio-a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4 WatchSource:0}: Error finding container a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4: Status 404 returned error can't find the container with id a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4 Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.619677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerStarted","Data":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.621955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" event={"ID":"b1682925-c14f-425a-b072-535a37cdca48","Type":"ContainerStarted","Data":"a489f0174978dd43cf9294ccd37aa816d7a7ace52ef27977f31e4e7e93ab59f4"} Jan 30 16:37:30 crc kubenswrapper[4766]: I0130 16:37:30.641094 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25857" podStartSLOduration=3.206846806 podStartE2EDuration="5.641056096s" podCreationTimestamp="2026-01-30 16:37:25 +0000 UTC" firstStartedPulling="2026-01-30 16:37:27.576022647 +0000 UTC m=+902.213979993" lastFinishedPulling="2026-01-30 16:37:30.010231937 +0000 UTC m=+904.648189283" observedRunningTime="2026-01-30 16:37:30.636804318 +0000 UTC m=+905.274761674" watchObservedRunningTime="2026-01-30 16:37:30.641056096 +0000 UTC m=+905.279013442" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.313297 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.317943 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.320424 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cw248" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.327047 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.395361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.395422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.497031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.497152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.516957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.517299 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwnl\" (UniqueName: \"kubernetes.io/projected/92fa5747-17c3-4b1c-a66a-e8b0a1d6f622-kube-api-access-9lwnl\") pod \"cert-manager-webhook-6888856db4-ltbxj\" (UID: \"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622\") " pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:33 crc kubenswrapper[4766]: I0130 16:37:33.641400 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.093576 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ltbxj"] Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.290474 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.290537 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.345232 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" event={"ID":"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622","Type":"ContainerStarted","Data":"32f83a3c64eea8078a35cbaf0925938f5d65ed7722efc40056d7bc1f58237195"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665866 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" event={"ID":"92fa5747-17c3-4b1c-a66a-e8b0a1d6f622","Type":"ContainerStarted","Data":"ff2b7038c2108e42282951273b6f2371080942271940a12a3292f3c8698d0cf8"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.665893 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.667683 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" event={"ID":"b1682925-c14f-425a-b072-535a37cdca48","Type":"ContainerStarted","Data":"d1385ff266f168681398029c2230e913564ad9923fbbfeaf2f50103fb3bff937"} Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.687908 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" podStartSLOduration=3.687876939 podStartE2EDuration="3.687876939s" podCreationTimestamp="2026-01-30 16:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:37:36.682047828 +0000 UTC m=+911.320005174" watchObservedRunningTime="2026-01-30 16:37:36.687876939 +0000 UTC m=+911.325834285" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.711889 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qr6lx" podStartSLOduration=2.624086076 podStartE2EDuration="8.711863051s" podCreationTimestamp="2026-01-30 16:37:28 +0000 UTC" firstStartedPulling="2026-01-30 16:37:29.735236176 +0000 UTC m=+904.373193522" lastFinishedPulling="2026-01-30 16:37:35.823013151 +0000 UTC m=+910.460970497" observedRunningTime="2026-01-30 16:37:36.698952184 +0000 UTC m=+911.336909530" watchObservedRunningTime="2026-01-30 16:37:36.711863051 +0000 UTC m=+911.349820397" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.730995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:36 crc kubenswrapper[4766]: I0130 16:37:36.793838 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:38 crc kubenswrapper[4766]: I0130 16:37:38.684039 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25857" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" containerID="cri-o://8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" gracePeriod=2 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.017302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.018927 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.031726 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.045295 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.045419 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081660 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.081782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.095381 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.182819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183255 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183342 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") pod \"0a17fb46-17ee-46fe-9e72-540aa19604cf\" (UID: \"0a17fb46-17ee-46fe-9e72-540aa19604cf\") " Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.183676 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.184168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.185214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.185670 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities" (OuterVolumeSpecName: "utilities") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.189424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f" (OuterVolumeSpecName: "kube-api-access-dbx2f") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "kube-api-access-dbx2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.200419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"certified-operators-fq6d9\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.243879 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a17fb46-17ee-46fe-9e72-540aa19604cf" (UID: "0a17fb46-17ee-46fe-9e72-540aa19604cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284774 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284818 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbx2f\" (UniqueName: \"kubernetes.io/projected/0a17fb46-17ee-46fe-9e72-540aa19604cf-kube-api-access-dbx2f\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.284835 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a17fb46-17ee-46fe-9e72-540aa19604cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.391845 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.652377 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:39 crc kubenswrapper[4766]: W0130 16:37:39.672832 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf6c0939_3788_45ef_b4d3_0f198fb4039f.slice/crio-33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608 WatchSource:0}: Error finding container 33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608: Status 404 returned error can't find the container with id 33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.691635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerStarted","Data":"33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698668 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" exitCode=0 Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25857" event={"ID":"0a17fb46-17ee-46fe-9e72-540aa19604cf","Type":"ContainerDied","Data":"abb8fe20ff8febe1d8453814192f1c606f41fd3fcad611e77b0dc1734c540c56"} Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698764 4766 scope.go:117] "RemoveContainer" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.698768 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25857" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.721519 4766 scope.go:117] "RemoveContainer" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.743395 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.749499 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25857"] Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.757011 4766 scope.go:117] "RemoveContainer" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.784040 4766 scope.go:117] "RemoveContainer" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" Jan 30 16:37:39 crc kubenswrapper[4766]: E0130 16:37:39.785462 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": container with ID starting with 8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36 not found: ID does not exist" containerID="8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.785550 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36"} err="failed to get container status \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": rpc error: code = NotFound desc = could not find container \"8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36\": container with ID starting with 8fa62bbbc4386a577f5ad9f9d98ca83483a8d4f332baf4f9d8900f2e02dd5e36 not found: ID does not exist" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.785596 4766 scope.go:117] "RemoveContainer" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81" Jan 30 16:37:39 crc kubenswrapper[4766]: E0130 16:37:39.788567 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": container with ID starting with eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81 not found: ID does not exist" containerID="eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.788603 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81"} err="failed to get container status \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": rpc error: code = NotFound desc = could not find container \"eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81\": container with ID starting with eca05fb6948c99e1a42ccccd551fd46bc9e4515706e8802e4d5f7197fbe55b81 not found: ID does not exist" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.788626 4766 scope.go:117] "RemoveContainer" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" Jan 30 16:37:39 crc kubenswrapper[4766]: E0130 16:37:39.789097 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": container with ID starting with a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d not found: ID does not exist" containerID="a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d" Jan 30 16:37:39 crc kubenswrapper[4766]: I0130 16:37:39.789139 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d"} err="failed to get container status \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": rpc error: code = NotFound desc = could not find container \"a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d\": container with ID starting with a17ed51550b4a9e534b97bd55f68d2055e0aeaa5469e71c2bdc954f62cca0a4d not found: ID does not exist" Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.053835 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" path="/var/lib/kubelet/pods/0a17fb46-17ee-46fe-9e72-540aa19604cf/volumes" Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.705301 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" exitCode=0 Jan 30 16:37:40 crc kubenswrapper[4766]: I0130 16:37:40.705355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b"} Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.645234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-ltbxj" Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.728279 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" exitCode=0 Jan 30 16:37:43 crc kubenswrapper[4766]: I0130 16:37:43.728314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7"} Jan 30 16:37:44 crc kubenswrapper[4766]: I0130 16:37:44.737944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerStarted","Data":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} Jan 30 16:37:44 crc kubenswrapper[4766]: I0130 16:37:44.769953 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq6d9" podStartSLOduration=2.33127993 podStartE2EDuration="5.769933401s" podCreationTimestamp="2026-01-30 16:37:39 +0000 UTC" firstStartedPulling="2026-01-30 16:37:40.706457456 +0000 UTC m=+915.344414802" lastFinishedPulling="2026-01-30 16:37:44.145110927 +0000 UTC m=+918.783068273" observedRunningTime="2026-01-30 16:37:44.767899265 +0000 UTC m=+919.405856611" watchObservedRunningTime="2026-01-30 16:37:44.769933401 +0000 UTC m=+919.407890747" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 
16:37:45.945999 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946328 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-content" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946349 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-content" Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946377 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: E0130 16:37:45.946393 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-utilities" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946401 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="extract-utilities" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.946523 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a17fb46-17ee-46fe-9e72-540aa19604cf" containerName="registry-server" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.947035 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.948886 4766 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zh89x" Jan 30 16:37:45 crc kubenswrapper[4766]: I0130 16:37:45.957561 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.078151 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.078366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.180089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.180236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") 
pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.198979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-bound-sa-token\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.199574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwttj\" (UniqueName: \"kubernetes.io/projected/d635eb48-c2c9-404e-9ffb-c8385134670b-kube-api-access-wwttj\") pod \"cert-manager-545d4d4674-9lmrd\" (UID: \"d635eb48-c2c9-404e-9ffb-c8385134670b\") " pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.266653 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9lmrd" Jan 30 16:37:46 crc kubenswrapper[4766]: I0130 16:37:46.762956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9lmrd"] Jan 30 16:37:46 crc kubenswrapper[4766]: W0130 16:37:46.767668 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd635eb48_c2c9_404e_9ffb_c8385134670b.slice/crio-f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba WatchSource:0}: Error finding container f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba: Status 404 returned error can't find the container with id f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.755863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9lmrd" event={"ID":"d635eb48-c2c9-404e-9ffb-c8385134670b","Type":"ContainerStarted","Data":"9242049d725f16b934a52e4def0df41908d2236ea945d97505f28750b7fa9d29"} Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.756543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9lmrd" event={"ID":"d635eb48-c2c9-404e-9ffb-c8385134670b","Type":"ContainerStarted","Data":"f484b35e9bfdaea1a71d49ee61449bdba444d4e3841229cad3c58ea55052e2ba"} Jan 30 16:37:47 crc kubenswrapper[4766]: I0130 16:37:47.776914 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-9lmrd" podStartSLOduration=2.776887999 podStartE2EDuration="2.776887999s" podCreationTimestamp="2026-01-30 16:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:37:47.771838529 +0000 UTC m=+922.409795875" watchObservedRunningTime="2026-01-30 16:37:47.776887999 +0000 UTC m=+922.414845345" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.392111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.393143 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.433950 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.808980 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:49 crc kubenswrapper[4766]: I0130 16:37:49.850809 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:51 crc kubenswrapper[4766]: I0130 16:37:51.778108 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fq6d9" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" containerID="cri-o://6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" gracePeriod=2 Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.137718 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290035 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.290336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") pod \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\" (UID: \"bf6c0939-3788-45ef-b4d3-0f198fb4039f\") " Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.291225 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities" (OuterVolumeSpecName: "utilities") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.302424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn" (OuterVolumeSpecName: "kube-api-access-kkcfn") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "kube-api-access-kkcfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.349535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf6c0939-3788-45ef-b4d3-0f198fb4039f" (UID: "bf6c0939-3788-45ef-b4d3-0f198fb4039f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392609 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392671 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkcfn\" (UniqueName: \"kubernetes.io/projected/bf6c0939-3788-45ef-b4d3-0f198fb4039f-kube-api-access-kkcfn\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.392685 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6c0939-3788-45ef-b4d3-0f198fb4039f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787830 4766 generic.go:334] "Generic (PLEG): container finished" podID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" exitCode=0 Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq6d9" event={"ID":"bf6c0939-3788-45ef-b4d3-0f198fb4039f","Type":"ContainerDied","Data":"33556d967db280ae11cd1592886d01bc97df1a2144ca2230206f0224167e7608"} Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787921 4766 scope.go:117] "RemoveContainer" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.787925 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fq6d9" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.809326 4766 scope.go:117] "RemoveContainer" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.815628 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.821128 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fq6d9"] Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.838403 4766 scope.go:117] "RemoveContainer" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859450 4766 scope.go:117] "RemoveContainer" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.859909 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": container with ID starting with 6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f not found: ID does not exist" containerID="6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859938 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f"} err="failed to get container status \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": rpc error: code = NotFound desc = could not find container \"6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f\": container with ID starting with 6e87b5fc00cecc4612d05934906aaa4d439d921874116adb65d556fc8be12b6f not found: ID does not exist" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.859959 4766 scope.go:117] "RemoveContainer" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.860312 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": container with ID starting with c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7 not found: ID does not exist" containerID="c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860340 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7"} err="failed to get container status \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": rpc error: code = NotFound desc = could not find container \"c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7\": container with ID starting with c5dc63f8e1d6f336d42fe7547ed132ce90530746f0e8c389ac7c00359038c7a7 not found: ID does not exist" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860354 4766 scope.go:117] "RemoveContainer" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: E0130 16:37:52.860581 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": container with ID starting with f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b not found: ID does not exist" containerID="f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b" Jan 30 16:37:52 crc kubenswrapper[4766]: I0130 16:37:52.860602 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b"} err="failed to get container status \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": rpc error: code = NotFound desc = could not find container \"f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b\": container with ID starting with f13d8029879008eb7e8943c08a404c67bb9657e75ab38741484f75ee50c5720b not found: ID does not exist" Jan 30 16:37:54 crc kubenswrapper[4766]: I0130 16:37:54.048404 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" path="/var/lib/kubelet/pods/bf6c0939-3788-45ef-b4d3-0f198fb4039f/volumes" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.712942 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713559 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713577 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713598 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-utilities" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713608 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-utilities" Jan 30 16:37:56 crc kubenswrapper[4766]: E0130 16:37:56.713627 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-content" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="extract-content" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.713775 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6c0939-3788-45ef-b4d3-0f198fb4039f" containerName="registry-server" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.714371 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.716942 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.717153 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.721665 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-6r4tz" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.744063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.855680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.957117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:56 crc kubenswrapper[4766]: I0130 16:37:56.976993 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"openstack-operator-index-tkmgw\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.039147 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.468247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:57 crc kubenswrapper[4766]: I0130 16:37:57.824984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerStarted","Data":"bd66530389ff5553db017967ddf2037ad50e201e2f1dfc09574b461b85f741e1"} Jan 30 16:37:58 crc kubenswrapper[4766]: I0130 16:37:58.966884 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.373774 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.374616 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.380310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.502394 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.604533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.629238 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lb2\" (UniqueName: \"kubernetes.io/projected/502b8426-9711-4e00-b59f-743352003f2b-kube-api-access-45lb2\") pod \"openstack-operator-index-dpb9n\" (UID: \"502b8426-9711-4e00-b59f-743352003f2b\") " pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.698036 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.840009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerStarted","Data":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.840168 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tkmgw" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" containerID="cri-o://b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" gracePeriod=2 Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.865591 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tkmgw" podStartSLOduration=1.7316621639999998 podStartE2EDuration="3.865571867s" podCreationTimestamp="2026-01-30 16:37:56 +0000 UTC" firstStartedPulling="2026-01-30 16:37:57.476572204 +0000 UTC m=+932.114529550" lastFinishedPulling="2026-01-30 16:37:59.610481907 +0000 UTC m=+934.248439253" observedRunningTime="2026-01-30 16:37:59.863301554 +0000 UTC m=+934.501258900" watchObservedRunningTime="2026-01-30 16:37:59.865571867 +0000 UTC m=+934.503529233" Jan 30 16:37:59 crc kubenswrapper[4766]: I0130 16:37:59.901723 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dpb9n"] Jan 30 16:37:59 crc kubenswrapper[4766]: W0130 16:37:59.934995 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod502b8426_9711_4e00_b59f_743352003f2b.slice/crio-60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6 
WatchSource:0}: Error finding container 60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6: Status 404 returned error can't find the container with id 60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6 Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.181679 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.313629 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") pod \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\" (UID: \"cd84aed8-c9c3-4e8d-b212-13955a78d7b4\") " Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.318866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp" (OuterVolumeSpecName: "kube-api-access-f7jnp") pod "cd84aed8-c9c3-4e8d-b212-13955a78d7b4" (UID: "cd84aed8-c9c3-4e8d-b212-13955a78d7b4"). InnerVolumeSpecName "kube-api-access-f7jnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.415643 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7jnp\" (UniqueName: \"kubernetes.io/projected/cd84aed8-c9c3-4e8d-b212-13955a78d7b4-kube-api-access-f7jnp\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846376 4766 generic.go:334] "Generic (PLEG): container finished" podID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" exitCode=0 Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846440 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tkmgw" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerDied","Data":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tkmgw" event={"ID":"cd84aed8-c9c3-4e8d-b212-13955a78d7b4","Type":"ContainerDied","Data":"bd66530389ff5553db017967ddf2037ad50e201e2f1dfc09574b461b85f741e1"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.846618 4766 scope.go:117] "RemoveContainer" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.847695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dpb9n" event={"ID":"502b8426-9711-4e00-b59f-743352003f2b","Type":"ContainerStarted","Data":"05f190037886438d95b7be40f1bbfe4211027858d8fb86e5cfdb5159cf018c79"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.847725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dpb9n" event={"ID":"502b8426-9711-4e00-b59f-743352003f2b","Type":"ContainerStarted","Data":"60d7feb08490fe05c5a4cd1658be3e0b09481194b2b7e38d9e3887e3045fffc6"} Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.866372 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dpb9n" podStartSLOduration=1.82262437 podStartE2EDuration="1.866352017s" podCreationTimestamp="2026-01-30 16:37:59 +0000 UTC" firstStartedPulling="2026-01-30 16:37:59.94176774 +0000 UTC m=+934.579725076" lastFinishedPulling="2026-01-30 16:37:59.985495367 +0000 UTC m=+934.623452723" observedRunningTime="2026-01-30 16:38:00.864554767 +0000 UTC m=+935.502512113" watchObservedRunningTime="2026-01-30 16:38:00.866352017 +0000 UTC m=+935.504309373" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.868785 4766 scope.go:117] "RemoveContainer" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: E0130 16:38:00.869431 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": container with ID starting with b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582 not found: ID does not exist" containerID="b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.869467 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582"} err="failed to get container status \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": rpc error: code = NotFound desc = could not find container \"b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582\": container with ID starting with b371864dc0f6bed27b7dd0241e80a1515afa20112e1afb79e99526e61c60c582 not found: ID does not exist" Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.880874 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:38:00 crc kubenswrapper[4766]: I0130 16:38:00.886883 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tkmgw"] Jan 30 16:38:02 crc kubenswrapper[4766]: I0130 16:38:02.046991 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" path="/var/lib/kubelet/pods/cd84aed8-c9c3-4e8d-b212-13955a78d7b4/volumes" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.045132 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.045572 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.699373 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.699566 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.736171 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:09 crc kubenswrapper[4766]: I0130 16:38:09.926127 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dpb9n" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.605851 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:16 crc kubenswrapper[4766]: E0130 16:38:16.606643 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.606655 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.606775 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd84aed8-c9c3-4e8d-b212-13955a78d7b4" containerName="registry-server" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.607609 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.610670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hqb7r" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.623881 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748364 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748493 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.748548 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.849854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.850440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.850673 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.876681 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:16 crc kubenswrapper[4766]: I0130 16:38:16.926062 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.337420 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv"] Jan 30 16:38:17 crc kubenswrapper[4766]: W0130 16:38:17.341712 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbc79777_d574_4d18_953a_6d51b5c2bd84.slice/crio-1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928 WatchSource:0}: Error finding container 1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928: Status 404 returned error can't find the container with id 1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928 Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949796 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="d1e1e63a775334305ecf471f09907034407ab73f21c38b8aaa80d0bed80fd160" exitCode=0 Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"d1e1e63a775334305ecf471f09907034407ab73f21c38b8aaa80d0bed80fd160"} Jan 30 16:38:17 crc kubenswrapper[4766]: I0130 16:38:17.949888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerStarted","Data":"1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928"} Jan 30 16:38:19 crc kubenswrapper[4766]: I0130 16:38:19.973086 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="96028487a00bdf6d6b85da2927b154580a3a4d86e04cccc1442fb4e60a5adc96" exitCode=0 Jan 30 16:38:19 crc kubenswrapper[4766]: I0130 16:38:19.973162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"96028487a00bdf6d6b85da2927b154580a3a4d86e04cccc1442fb4e60a5adc96"} Jan 30 16:38:20 crc kubenswrapper[4766]: I0130 16:38:20.982667 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerID="a0667897374fd1d7d0b96fac3e5ab5303850348d561ffad0c5f2041c5320a561" exitCode=0 Jan 30 16:38:20 crc kubenswrapper[4766]: I0130 16:38:20.982723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"a0667897374fd1d7d0b96fac3e5ab5303850348d561ffad0c5f2041c5320a561"} Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.247278 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.325710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.326155 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.326357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") pod \"cbc79777-d574-4d18-953a-6d51b5c2bd84\" (UID: \"cbc79777-d574-4d18-953a-6d51b5c2bd84\") " Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.327383 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle" (OuterVolumeSpecName: "bundle") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.333169 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764" (OuterVolumeSpecName: "kube-api-access-z5764") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "kube-api-access-z5764". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.342711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util" (OuterVolumeSpecName: "util") pod "cbc79777-d574-4d18-953a-6d51b5c2bd84" (UID: "cbc79777-d574-4d18-953a-6d51b5c2bd84"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428005 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428047 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbc79777-d574-4d18-953a-6d51b5c2bd84-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.428056 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5764\" (UniqueName: \"kubernetes.io/projected/cbc79777-d574-4d18-953a-6d51b5c2bd84-kube-api-access-z5764\") on node \"crc\" DevicePath \"\"" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" event={"ID":"cbc79777-d574-4d18-953a-6d51b5c2bd84","Type":"ContainerDied","Data":"1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928"} Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999368 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv" Jan 30 16:38:22 crc kubenswrapper[4766]: I0130 16:38:22.999374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db96f7c4ef988280e169d2272439764ce0f3f81cbcce598bb9f124770611928" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.579154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"] Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580022 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="util" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580039 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="util" Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580059 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580067 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: E0130 16:38:28.580082 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="pull" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580091 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="pull" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580234 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc79777-d574-4d18-953a-6d51b5c2bd84" containerName="extract" Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580659 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.580659 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.582706 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vpl4v"
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.600522 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"]
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.633482 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.734568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.762552 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62wz\" (UniqueName: \"kubernetes.io/projected/e1df6663-4a1f-4900-8eba-215a6f08beb0-kube-api-access-c62wz\") pod \"openstack-operator-controller-init-5c7c85d9bc-85t58\" (UID: \"e1df6663-4a1f-4900-8eba-215a6f08beb0\") " pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:28 crc kubenswrapper[4766]: I0130 16:38:28.900901 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:29 crc kubenswrapper[4766]: I0130 16:38:29.396564 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"]
Jan 30 16:38:30 crc kubenswrapper[4766]: I0130 16:38:30.060239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" event={"ID":"e1df6663-4a1f-4900-8eba-215a6f08beb0","Type":"ContainerStarted","Data":"751f479300a9badf6846b64d74f180b7def8679b381884837e34197959023b59"}
Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.103491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" event={"ID":"e1df6663-4a1f-4900-8eba-215a6f08beb0","Type":"ContainerStarted","Data":"1f4c9771221d4d4aa209204af8c2d10f36e887f858fdcba2df171f5191f3966c"}
Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.104050 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58"
Jan 30 16:38:35 crc kubenswrapper[4766]: I0130 16:38:35.134937 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" podStartSLOduration=1.89751764 podStartE2EDuration="7.134916884s" podCreationTimestamp="2026-01-30 16:38:28 +0000 UTC" firstStartedPulling="2026-01-30 16:38:29.398818086 +0000 UTC m=+964.036775432" lastFinishedPulling="2026-01-30 16:38:34.63621733 +0000 UTC m=+969.274174676" observedRunningTime="2026-01-30 16:38:35.129547205 +0000 UTC m=+969.767504551" watchObservedRunningTime="2026-01-30 16:38:35.134916884 +0000 UTC m=+969.772874230"
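The pod_startup_latency_tracker line above carries two derived values: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go check of the arithmetic, using the timestamps copied from the line (it prints 7.134916884s and 1.89751764s):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        // Layout matching timestamps like "2026-01-30 16:38:29.398818086 +0000 UTC".
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-30 16:38:28 +0000 UTC")
        firstPull := mustParse("2026-01-30 16:38:29.398818086 +0000 UTC")
        lastPull := mustParse("2026-01-30 16:38:34.63621733 +0000 UTC")
        running := mustParse("2026-01-30 16:38:35.134916884 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 7.134916884s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
        fmt.Println(e2e, slo)
    }
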
containerID="cri-o://5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" gracePeriod=600 Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140599 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" exitCode=0 Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140647 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf"} Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} Jan 30 16:38:40 crc kubenswrapper[4766]: I0130 16:38:40.140980 4766 scope.go:117] "RemoveContainer" containerID="2324c4835fd4bdd1303bb3b79291e1e367ad78303906e6548593c60cc4a66d08" Jan 30 16:38:48 crc kubenswrapper[4766]: I0130 16:38:48.903881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c7c85d9bc-85t58" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.238032 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.239571 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.242072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-z9hxc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.247802 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.248955 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.251009 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4qwsx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.257103 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.258372 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.258372 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.261820 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-x572m"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.273577 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.308081 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.323261 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.327940 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.328250 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.328393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: \"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.349745 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.351132 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.357665 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-m546p"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.361247 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.362370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.364675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-88q26"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.369228 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.387571 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.408030 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.409012 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.411985 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dfdjc"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432908 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.432943 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: \"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.440279 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.441069 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.444880 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7lj62"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.445091 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.463257 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.471691 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.486722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s8v\" (UniqueName: \"kubernetes.io/projected/c610cc53-6813-4c5b-86e9-b421aaa21666-kube-api-access-z6s8v\") pod \"designate-operator-controller-manager-8f4c5cb64-rjgtk\" (UID: \"c610cc53-6813-4c5b-86e9-b421aaa21666\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.492880 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29ph\" (UniqueName: \"kubernetes.io/projected/46a7c725-b480-4f85-91d0-24831e713b26-kube-api-access-d29ph\") pod \"barbican-operator-controller-manager-fc589b45f-ssl7s\" (UID: \"46a7c725-b480-4f85-91d0-24831e713b26\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.495996 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.497312 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.502865 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-s58qv"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.515079 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6r4\" (UniqueName: \"kubernetes.io/projected/72b84e1c-8ed8-4fae-8dff-ca2576579904-kube-api-access-lh6r4\") pod \"cinder-operator-controller-manager-787499fbb-mlkcx\" (UID: \"72b84e1c-8ed8-4fae-8dff-ca2576579904\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.539426 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544254 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.544356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"
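Every kube-api-access-* volume in these mount operations is a projected service-account volume; inside the container it surfaces the API token, CA bundle, and namespace under the standard path /var/run/secrets/kubernetes.io/serviceaccount. A minimal Go sketch of a workload consuming it (only meaningful when run inside a pod):

    package main

    import (
        "fmt"
        "os"
    )

    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
        // The projected kube-api-access-* volume provides these three files.
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            b, err := os.ReadFile(saDir + "/" + name)
            if err != nil {
                fmt.Println("not running inside a pod?", err)
                return
            }
            fmt.Printf("%s: %d bytes\n", name, len(b))
        }
    }
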
\"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.559634 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.561433 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.562543 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.569341 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.570249 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.575558 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fwd4v" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.576226 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-765t7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.577613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.600757 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.601692 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.637299 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7dbw\" (UniqueName: \"kubernetes.io/projected/d34f90ce-9c03-441f-85cb-67b1666672fc-kube-api-access-s7dbw\") pod \"glance-operator-controller-manager-6bfc9d4d48-7287m\" (UID: \"d34f90ce-9c03-441f-85cb-67b1666672fc\") " pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.637874 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsz4j\" (UniqueName: \"kubernetes.io/projected/2a5fe995-2904-4751-ae74-958efaa8596a-kube-api-access-vsz4j\") pod \"heat-operator-controller-manager-65dc6c8d9c-8hrwp\" (UID: \"2a5fe995-2904-4751-ae74-958efaa8596a\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.655660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.659878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.660062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwzb\" (UniqueName: \"kubernetes.io/projected/be908bdc-d0b5-4409-b088-b9b51de3cfb0-kube-api-access-nnwzb\") pod \"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.660145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:07 crc kubenswrapper[4766]: E0130 16:39:07.660228 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:07 crc kubenswrapper[4766]: E0130 16:39:07.660313 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:08.160292789 +0000 UTC m=+1002.798250135 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.667557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.667680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.699473 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.703334 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.720087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwzb\" (UniqueName: \"kubernetes.io/projected/be908bdc-d0b5-4409-b088-b9b51de3cfb0-kube-api-access-nnwzb\") pod \"horizon-operator-controller-manager-5fb775575f-lhxhc\" (UID: \"be908bdc-d0b5-4409-b088-b9b51de3cfb0\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.731229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zvn\" (UniqueName: \"kubernetes.io/projected/16fd0d31-da4c-4c6b-bbc4-8302daee3ee5-kube-api-access-f6zvn\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-jhbv7\" (UID: \"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.732589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7d9\" (UniqueName: \"kubernetes.io/projected/09fcb126-016c-4b79-91d5-90e98e3da7f3-kube-api-access-dc7d9\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.732929 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.756504 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"] Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.757483 4766 util.go:30] "No sandbox for pod can be found. 
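The "secret ... not found" failure above is transient by design: the cert volume stays pending and nestedpendingoperations doubles the wait between attempts (500ms here, 1s on the retry recorded further down) until the webhook certificate secret is finally published. A toy Go sketch of that doubling backoff; the 500ms start is taken from this log, while the cap value is an assumed illustration, not something this log shows:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial = 500 * time.Millisecond        // first durationBeforeRetry in the log
            maxWait = 2*time.Minute + 2*time.Second // assumed cap for illustration
        )
        wait := initial
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, wait)
            wait *= 2 // doubling backoff: 500ms, 1s, 2s, ...
            if wait > maxWait {
                wait = maxWait
            }
        }
    }
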
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.757483 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.761003 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8x6s2"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.796888 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.799973 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.800056 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.808359 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.809597 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.812616 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dhd5w"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.819271 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.826101 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.841114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs6qh\" (UniqueName: \"kubernetes.io/projected/0974b654-1fc0-4d97-9be3-eca153de4c57-kube-api-access-zs6qh\") pod \"manila-operator-controller-manager-7d96d95959-l4pbc\" (UID: \"0974b654-1fc0-4d97-9be3-eca153de4c57\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.854950 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-867g6\" (UniqueName: \"kubernetes.io/projected/b0db2f42-5872-4cac-9ee0-5990c49e0a26-kube-api-access-867g6\") pod \"keystone-operator-controller-manager-64469b487f-xkfn6\" (UID: \"b0db2f42-5872-4cac-9ee0-5990c49e0a26\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.856290 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.857435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.860579 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.866987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-p9fn5"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.876268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.883779 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.884979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.892455 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.895198 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-n8kpd"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.901571 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgw69\" (UniqueName: \"kubernetes.io/projected/d4c39f8d-f83d-4311-bb99-24dfa7eaeafd-kube-api-access-pgw69\") pod \"neutron-operator-controller-manager-576995988b-kkvlj\" (UID: \"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.901648 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.903816 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.909878 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.911956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l2sxb"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.919836 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.931838 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.932998 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.935832 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.936166 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4c97f"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.953548 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.955505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.960631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.976121 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"]
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.982687 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"
Jan 30 16:39:07 crc kubenswrapper[4766]: I0130 16:39:07.988150 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.004452 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z5dp6"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.004738 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.005889 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgw69\" (UniqueName: \"kubernetes.io/projected/d4c39f8d-f83d-4311-bb99-24dfa7eaeafd-kube-api-access-pgw69\") pod \"neutron-operator-controller-manager-576995988b-kkvlj\" (UID: \"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006844 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.006962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.012859 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6qgsq"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.067038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7m9x\" (UniqueName: \"kubernetes.io/projected/1ea9d2ea-ca11-428c-ab61-28bf391bcd4f-kube-api-access-r7m9x\") pod \"mariadb-operator-controller-manager-67bf948998-jzztd\" (UID: \"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"
pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.092435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.098245 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106486 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106650 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.106911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107918 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.107984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108062 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.108085 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.112473 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mb7mw" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.118993 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.145146 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.145285 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.146339 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.152051 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8vj\" (UniqueName: \"kubernetes.io/projected/0582a100-4b50-452f-baca-e67b4d6f2891-kube-api-access-2d8vj\") pod \"nova-operator-controller-manager-5644b66645-6jc7f\" (UID: \"0582a100-4b50-452f-baca-e67b4d6f2891\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.158689 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sfl6p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.169942 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfnj\" (UniqueName: \"kubernetes.io/projected/a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac-kube-api-access-hbfnj\") pod \"octavia-operator-controller-manager-694c6dcf95-swq4p\" (UID: \"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.170104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj79c\" (UniqueName: \"kubernetes.io/projected/8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90-kube-api-access-nj79c\") pod \"ovn-operator-controller-manager-788c46999f-2jmqd\" (UID: \"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.181213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"] Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.182265 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.184589 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mxqzz" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.197849 4766 util.go:30] "No sandbox for pod can be found. 
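With this many operator pods racing through the same ADD / no-sandbox / volume-mount cycle, the journal is easier to audit in aggregate. A small Go sketch of hypothetical tooling (reading journal text on stdin) that counts the "No sandbox" occurrences per pod:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the pod="namespace/name" field on the kubelet's
        // "No sandbox for pod can be found" lines.
        re := regexp.MustCompile(`No sandbox for pod can be found.*?pod="([^"]+)"`)
        counts := map[string]int{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%4d %s\n", n, pod)
        }
    }
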
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.197849 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210098 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210130 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" (UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210162 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.210229 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"
Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210643 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210684 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:08.710671097 +0000 UTC m=+1003.348628443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210826 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.210851 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.210843832 +0000 UTC m=+1003.848801178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.220842 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.234069 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.275436 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.292784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngcqx\" (UniqueName: \"kubernetes.io/projected/90a2893c-9d38-4d53-93d9-a50421172933-kube-api-access-ngcqx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.294857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9c7\" (UniqueName: \"kubernetes.io/projected/04cf0394-fb7b-41a9-a9bb-6fec8537d393-kube-api-access-ph9c7\") pod \"placement-operator-controller-manager-5b964cf4cd-bm24k\" (UID: \"04cf0394-fb7b-41a9-a9bb-6fec8537d393\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321167 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" (UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.321393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.327898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9t4\" (UniqueName: \"kubernetes.io/projected/5eacef6b-7362-4c43-912a-eb3e6ccce6e9-kube-api-access-8x9t4\") pod \"swift-operator-controller-manager-566d8d7445-l44w4\" (UID: \"5eacef6b-7362-4c43-912a-eb3e6ccce6e9\") " pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.344235 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.359666 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsmp\" (UniqueName: \"kubernetes.io/projected/0c603c94-f0b0-4820-a5a1-0ab9a76ceb49-kube-api-access-phsmp\") pod \"telemetry-operator-controller-manager-69484b8d9d-tqxks\" (UID: \"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49\") " pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.370375 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.424522 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.424656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.447302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.448151 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.459263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9tl7m"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.459639 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.461214 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.473282 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.485822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.501110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdz6\" (UniqueName: \"kubernetes.io/projected/c03d46f4-f454-4b31-b4c7-5c324390d8ec-kube-api-access-krdz6\") pod \"test-operator-controller-manager-56f8bfcd9f-d7xxm\" (UID: \"c03d46f4-f454-4b31-b4c7-5c324390d8ec\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.516475 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.517478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.525587 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-phwmg"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.529894 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"]
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.535901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vnv\" (UniqueName: \"kubernetes.io/projected/55fb4fd9-f80b-474b-b9c9-758720536349-kube-api-access-99vnv\") pod \"watcher-operator-controller-manager-586b95b788-dklb4\" (UID: \"55fb4fd9-f80b-474b-b9c9-758720536349\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"
Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.551502 4766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.628965 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629386 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxcs\" (UniqueName: \"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.629444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733739 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxcs\" (UniqueName: 
\"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.733792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.733977 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734032 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.234016931 +0000 UTC m=+1003.871974277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734590 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.734618917 +0000 UTC m=+1004.372576263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734663 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: E0130 16:39:08.734682 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:09.234675989 +0000 UTC m=+1003.872633325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.782857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqblv\" (UniqueName: \"kubernetes.io/projected/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-kube-api-access-kqblv\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.794752 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.846925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxcs\" (UniqueName: \"kubernetes.io/projected/dc1c52ba-db5b-40ac-87da-de36346e8491-kube-api-access-lsxcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-49xwp\" (UID: \"dc1c52ba-db5b-40ac-87da-de36346e8491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:08 crc kubenswrapper[4766]: I0130 16:39:08.970631 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.138413 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.248694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.249345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249370 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249472 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:10.249449116 +0000 UTC m=+1004.887406502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.249554 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249645 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249759 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249766 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:11.249718924 +0000 UTC m=+1005.887676280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.249799 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:10.249787225 +0000 UTC m=+1004.887744621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.354194 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" event={"ID":"72b84e1c-8ed8-4fae-8dff-ca2576579904","Type":"ContainerStarted","Data":"60589c57f1b9fc748ea034d80c5d0190674d723fc5ce9c74e34d6da7c3f4f1f4"} Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.626210 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.629603 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd34f90ce_9c03_441f_85cb_67b1666672fc.slice/crio-78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281 WatchSource:0}: Error finding container 78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281: Status 404 returned error can't find the container with id 78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281 Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.676105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.686012 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.734505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.767888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.768006 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: E0130 16:39:09.768044 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:11.768031238 +0000 UTC m=+1006.405988584 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.878731 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.880779 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fd0d31_da4c_4c6b_bbc4_8302daee3ee5.slice/crio-14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189 WatchSource:0}: Error finding container 14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189: Status 404 returned error can't find the container with id 14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189 Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.910269 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6"] Jan 30 16:39:09 crc kubenswrapper[4766]: I0130 16:39:09.943049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp"] Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.944561 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0db2f42_5872_4cac_9ee0_5990c49e0a26.slice/crio-cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a WatchSource:0}: Error finding container cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a: Status 404 returned error can't find the container with id cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a Jan 30 16:39:09 crc kubenswrapper[4766]: W0130 16:39:09.947693 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5fe995_2904_4751_ae74_958efaa8596a.slice/crio-815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d WatchSource:0}: Error finding container 815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d: Status 404 returned error can't find the container with id 815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.246944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.255847 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc1c52ba_db5b_40ac_87da_de36346e8491.slice/crio-61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82 WatchSource:0}: Error finding container 61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82: Status 404 returned error can't find the container with id 61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82 Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.273408 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.276429 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.276521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.276739 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.276805 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:12.276785959 +0000 UTC m=+1006.914743305 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.277234 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.277278 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:12.277265262 +0000 UTC m=+1006.915222618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.281369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.282125 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ea9d2ea_ca11_428c_ab61_28bf391bcd4f.slice/crio-ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516 WatchSource:0}: Error finding container ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516: Status 404 returned error can't find the container with id ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516 Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.289345 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks"] Jan 30 16:39:10 crc kubenswrapper[4766]: W0130 16:39:10.295225 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0582a100_4b50_452f_baca_e67b4d6f2891.slice/crio-8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d WatchSource:0}: Error finding container 8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d: Status 404 returned error can't find the container with id 8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.300365 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.312753 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.313902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.327911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd"] Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335217 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x9t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-566d8d7445-l44w4_openstack-operators(5eacef6b-7362-4c43-912a-eb3e6ccce6e9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335459 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krdz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-d7xxm_openstack-operators(c03d46f4-f454-4b31-b4c7-5c324390d8ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.335569 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgw69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-kkvlj_openstack-operators(d4c39f8d-f83d-4311-bb99-24dfa7eaeafd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 
16:39:10.335625 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj79c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-2jmqd_openstack-operators(8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336363 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336550 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.336631 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 
16:39:10.336767 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.338956 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbfnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-swq4p_openstack-operators(a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.340162 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.339372 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.346116 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4"] Jan 30 
16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.351846 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p"] Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.358067 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k"] Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.363324 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ph9c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-bm24k_openstack-operators(04cf0394-fb7b-41a9-a9bb-6fec8537d393): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.365457 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.371911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" 
event={"ID":"5eacef6b-7362-4c43-912a-eb3e6ccce6e9","Type":"ContainerStarted","Data":"899985ba76be1bd97e3368a75f10c705e148b92f9398cd0b6c3068ca08fc87f5"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.374111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" event={"ID":"b0db2f42-5872-4cac-9ee0-5990c49e0a26","Type":"ContainerStarted","Data":"cd74f86235f57e1705da50e756093414bd5f0451e027dd23c4b5f8e5e8291f3a"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.375984 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.376799 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" event={"ID":"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5","Type":"ContainerStarted","Data":"14a496dd48f87c96d1b8058ba219826e827118612b8ac815646c6233d4808189"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.388254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" event={"ID":"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd","Type":"ContainerStarted","Data":"12228b9af4d4bc308023e3a775d30e74e57110d29bbdb312f4dfd3ff0fdf0937"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.389835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.389874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" event={"ID":"46a7c725-b480-4f85-91d0-24831e713b26","Type":"ContainerStarted","Data":"a9655c98b93f1c61d3ee397fadce9fec766d22834caae8357abed7842d073c57"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.394753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" event={"ID":"c610cc53-6813-4c5b-86e9-b421aaa21666","Type":"ContainerStarted","Data":"2d081a00177fa8db702929028c1e6c6cc9bf4739ea0af40d37d12f283db1f362"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.398227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" event={"ID":"0582a100-4b50-452f-baca-e67b4d6f2891","Type":"ContainerStarted","Data":"8e36a40a79788ad39b40df248cb868f5e29ed28ba048dbfc39e64872ad098a7d"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.399409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" event={"ID":"0974b654-1fc0-4d97-9be3-eca153de4c57","Type":"ContainerStarted","Data":"fa2a385a8a979eb1c6d6d2cac44e589554ccfefa16fea363575c39fa4ff71408"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 
16:39:10.400081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" event={"ID":"be908bdc-d0b5-4409-b088-b9b51de3cfb0","Type":"ContainerStarted","Data":"ae8ac28bd87773b8c1ed6ee0840f4603e2667361073061ffb4cd37d61bd128a6"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.400871 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" event={"ID":"04cf0394-fb7b-41a9-a9bb-6fec8537d393","Type":"ContainerStarted","Data":"2e89879bd07df8599e3d309460b4b5fbe981645728ecad1e2e363383ab955328"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.403431 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.429719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" event={"ID":"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90","Type":"ContainerStarted","Data":"e536f2a5c1eeb77d5973f93e8028eaed0a69d0a6f92bc0b7a0d7de95799e8aa2"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.438147 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.451405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" event={"ID":"c03d46f4-f454-4b31-b4c7-5c324390d8ec","Type":"ContainerStarted","Data":"aab8c41ae82cca782bf11aef1eedcb1a459f6f217a5d4750d6ba2674dee810fb"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.452907 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.458994 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" event={"ID":"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f","Type":"ContainerStarted","Data":"ec801056a6ec4765c2bd8df17157fa6fa4a55d01facd0540bec960ba4b960516"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.479240 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" event={"ID":"d34f90ce-9c03-441f-85cb-67b1666672fc","Type":"ContainerStarted","Data":"78e165d7b3a6c6b87b2f5d0c693b5622d778b4290226302193c8dfbb9b0cd281"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.480879 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" event={"ID":"2a5fe995-2904-4751-ae74-958efaa8596a","Type":"ContainerStarted","Data":"815dff27c6c9d212aded026399c51d20dbe39dec8cd198d83ead0d32051d1b6d"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.484404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" event={"ID":"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49","Type":"ContainerStarted","Data":"bc7d35dda2a93701d1d3d95881e3790fdb4b9319a75d9df49175b79e4e1e2b7c"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.489550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" event={"ID":"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac","Type":"ContainerStarted","Data":"f15be8d71672a3a992472c4cb823c9521797bff25d38501e31b1d8887a39cfb0"} Jan 30 16:39:10 crc kubenswrapper[4766]: E0130 16:39:10.492534 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.493245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" event={"ID":"dc1c52ba-db5b-40ac-87da-de36346e8491","Type":"ContainerStarted","Data":"61a50f674369938611d93cd1201b24380e4fe6ef1b3fb05b8a28ef44fd6a6a82"} Jan 30 16:39:10 crc kubenswrapper[4766]: I0130 16:39:10.495368 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" event={"ID":"55fb4fd9-f80b-474b-b9c9-758720536349","Type":"ContainerStarted","Data":"72b9ee2bece212e3d66ee55f14cfaef4a0a15ee460eb0044c173d03fc5537ad3"} Jan 30 16:39:11 crc kubenswrapper[4766]: I0130 16:39:11.291925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.292120 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.292169 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:15.292152184 +0000 UTC m=+1009.930109530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.545243 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.579931 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podUID="04cf0394-fb7b-41a9-a9bb-6fec8537d393" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.579932 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580018 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580076 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:e5570727bc92a0d4d95be8232fa9ccad32e212f77538a1bf5319b6e951be2011\\\"\"" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podUID="5eacef6b-7362-4c43-912a-eb3e6ccce6e9" Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.580123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:11 crc kubenswrapper[4766]: I0130 16:39:11.806902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:11 crc 
kubenswrapper[4766]: E0130 16:39:11.807204 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:11 crc kubenswrapper[4766]: E0130 16:39:11.807265 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:15.80724832 +0000 UTC m=+1010.445205666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: I0130 16:39:12.321095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:12 crc kubenswrapper[4766]: I0130 16:39:12.321202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321284 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321297 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321351 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:16.321332358 +0000 UTC m=+1010.959289704 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:12 crc kubenswrapper[4766]: E0130 16:39:12.321370 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:16.321363149 +0000 UTC m=+1010.959320495 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: I0130 16:39:15.300688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.300985 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.301154 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:23.301135054 +0000 UTC m=+1017.939092400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: I0130 16:39:15.818057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.818287 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:15 crc kubenswrapper[4766]: E0130 16:39:15.818350 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:23.818332137 +0000 UTC m=+1018.456289483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: I0130 16:39:16.334020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:16 crc kubenswrapper[4766]: I0130 16:39:16.334223 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334380 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:24.334422062 +0000 UTC m=+1018.972379408 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334876 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:16 crc kubenswrapper[4766]: E0130 16:39:16.334910 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:24.334899765 +0000 UTC m=+1018.972857111 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.041915 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.358938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.359164 4766 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.359231 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert podName:09fcb126-016c-4b79-91d5-90e98e3da7f3 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:39.359215009 +0000 UTC m=+1033.997172355 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert") pod "infra-operator-controller-manager-79955696d6-ddthn" (UID: "09fcb126-016c-4b79-91d5-90e98e3da7f3") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: I0130 16:39:23.867698 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.868261 4766 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:23 crc kubenswrapper[4766]: E0130 16:39:23.868438 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert podName:90a2893c-9d38-4d53-93d9-a50421172933 nodeName:}" failed. No retries permitted until 2026-01-30 16:39:39.868411743 +0000 UTC m=+1034.506369169 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" (UID: "90a2893c-9d38-4d53-93d9-a50421172933") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: I0130 16:39:24.374289 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:24 crc kubenswrapper[4766]: I0130 16:39:24.374362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374482 4766 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374535 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:40.374518562 +0000 UTC m=+1035.012475908 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "metrics-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374482 4766 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:39:24 crc kubenswrapper[4766]: E0130 16:39:24.374707 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs podName:b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f nodeName:}" failed. No retries permitted until 2026-01-30 16:39:40.374675176 +0000 UTC m=+1035.012632522 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs") pod "openstack-operator-controller-manager-86bf68df65-m95g8" (UID: "b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f") : secret "webhook-server-cert" not found Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.103812 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.104069 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-867g6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-64469b487f-xkfn6_openstack-operators(b0db2f42-5872-4cac-9ee0-5990c49e0a26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.105270 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" 
podUID="b0db2f42-5872-4cac-9ee0-5990c49e0a26" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.685497 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" podUID="b0db2f42-5872-4cac-9ee0-5990c49e0a26" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.793813 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.794301 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zs6qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7d96d95959-l4pbc_openstack-operators(0974b654-1fc0-4d97-9be3-eca153de4c57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:25 crc kubenswrapper[4766]: E0130 16:39:25.795692 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podUID="0974b654-1fc0-4d97-9be3-eca153de4c57" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.691649 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:ebc99d4caf2352643c25de5816c34dfe551961e39261e26ff89ee0afdd98819c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podUID="0974b654-1fc0-4d97-9be3-eca153de4c57" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.968756 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.968997 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsz4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-65dc6c8d9c-8hrwp_openstack-operators(2a5fe995-2904-4751-ae74-958efaa8596a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 30 16:39:26 crc kubenswrapper[4766]: E0130 16:39:26.970207 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podUID="2a5fe995-2904-4751-ae74-958efaa8596a" Jan 30 16:39:27 crc kubenswrapper[4766]: E0130 16:39:27.736524 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:b0b0a4b7f190695830d9c85683e48bf60edfc52a3d095afee09ef2619c4a7d28\\\"\"" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podUID="2a5fe995-2904-4751-ae74-958efaa8596a" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.967134 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.968408 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2d8vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-5644b66645-6jc7f_openstack-operators(0582a100-4b50-452f-baca-e67b4d6f2891): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:36 crc kubenswrapper[4766]: E0130 16:39:36.969668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podUID="0582a100-4b50-452f-baca-e67b4d6f2891" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.181072 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.181290 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99vnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-dklb4_openstack-operators(55fb4fd9-f80b-474b-b9c9-758720536349): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.182649 4766 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podUID="55fb4fd9-f80b-474b-b9c9-758720536349" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.606107 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.606335 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phsmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-69484b8d9d-tqxks_openstack-operators(0c603c94-f0b0-4820-a5a1-0ab9a76ceb49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.608682 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" 
podUID="0c603c94-f0b0-4820-a5a1-0ab9a76ceb49" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768274 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podUID="55fb4fd9-f80b-474b-b9c9-758720536349" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768314 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podUID="0582a100-4b50-452f-baca-e67b4d6f2891" Jan 30 16:39:37 crc kubenswrapper[4766]: E0130 16:39:37.768526 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:5bca7e1776db32cb5889c1cfca39662741f9c0f531e6d2e52d9d41afb32ae543\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" podUID="0c603c94-f0b0-4820-a5a1-0ab9a76ceb49" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.190071 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.190311 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsxcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-49xwp_openstack-operators(dc1c52ba-db5b-40ac-87da-de36346e8491): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.191567 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podUID="dc1c52ba-db5b-40ac-87da-de36346e8491" Jan 30 16:39:38 crc kubenswrapper[4766]: E0130 16:39:38.774804 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podUID="dc1c52ba-db5b-40ac-87da-de36346e8491" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.168167 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.168388 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbfnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-swq4p_openstack-operators(a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.169552 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.421992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.428032 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09fcb126-016c-4b79-91d5-90e98e3da7f3-cert\") pod \"infra-operator-controller-manager-79955696d6-ddthn\" (UID: \"09fcb126-016c-4b79-91d5-90e98e3da7f3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.570075 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7lj62" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.578549 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.667214 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.667406 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgw69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-576995988b-kkvlj_openstack-operators(d4c39f8d-f83d-4311-bb99-24dfa7eaeafd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:39 crc kubenswrapper[4766]: E0130 16:39:39.668791 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.934826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:39 crc kubenswrapper[4766]: I0130 16:39:39.940503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90a2893c-9d38-4d53-93d9-a50421172933-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r\" (UID: \"90a2893c-9d38-4d53-93d9-a50421172933\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.129623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4c97f" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.137272 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.233744 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.233902 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krdz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-d7xxm_openstack-operators(c03d46f4-f454-4b31-b4c7-5c324390d8ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:40 crc kubenswrapper[4766]: E0130 16:39:40.235116 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.443741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.444891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.448074 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-metrics-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.455292 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f-webhook-certs\") pod \"openstack-operator-controller-manager-86bf68df65-m95g8\" (UID: \"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f\") " pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.460909 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9tl7m" Jan 30 16:39:40 crc kubenswrapper[4766]: I0130 16:39:40.469248 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.302895 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.303048 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj79c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-2jmqd_openstack-operators(8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:39:41 crc kubenswrapper[4766]: E0130 16:39:41.304344 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.340802 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r"] Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.581968 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ddthn"] Jan 30 16:39:42 crc kubenswrapper[4766]: W0130 16:39:42.591779 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09fcb126_016c_4b79_91d5_90e98e3da7f3.slice/crio-2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa WatchSource:0}: Error finding container 2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa: Status 404 returned error can't find the container with id 2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa Jan 30 16:39:42 crc kubenswrapper[4766]: W0130 16:39:42.673004 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a6f6d6_6e33_4f4c_a0e4_cff7d180eb6f.slice/crio-bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995 WatchSource:0}: Error finding container bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995: Status 404 returned error can't find the container with id bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995 Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.676873 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8"] Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.801772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" event={"ID":"d34f90ce-9c03-441f-85cb-67b1666672fc","Type":"ContainerStarted","Data":"e09e22e7983b7ade69bc147569acb5bc9f1f5d00c149f873a912116fcd2a1764"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.801880 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.804053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" event={"ID":"2a5fe995-2904-4751-ae74-958efaa8596a","Type":"ContainerStarted","Data":"8d570920d17fc1ae12b9b54e55967afb6c83352925ae29443e891db9ad479d3b"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.804285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.809661 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" event={"ID":"0974b654-1fc0-4d97-9be3-eca153de4c57","Type":"ContainerStarted","Data":"f700b6e4fa71d69e6da1639c284016417854dccc06a649be11abc844ee20d6d0"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.809884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.811827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" 
event={"ID":"be908bdc-d0b5-4409-b088-b9b51de3cfb0","Type":"ContainerStarted","Data":"88b672a2abad1a3cd100abd06985751753dedf6f1e8d215f28e92c387886bbc6"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.811965 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.813438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" event={"ID":"09fcb126-016c-4b79-91d5-90e98e3da7f3","Type":"ContainerStarted","Data":"2f91eabc889a8812d95efd41107ecbe1bdc9f26c344a244bc3fa0324cde6a0fa"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.816305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" event={"ID":"b0db2f42-5872-4cac-9ee0-5990c49e0a26","Type":"ContainerStarted","Data":"558205bfda960ba437e4ea5b8dbed5e5538b95235f953071b86f9166d2d19f42"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.816564 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.818663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" event={"ID":"04cf0394-fb7b-41a9-a9bb-6fec8537d393","Type":"ContainerStarted","Data":"65e18457d3c4da2d57d465a1bcc526961ce800ed5cd460dd9b1705c353812612"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.819324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.825990 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" event={"ID":"90a2893c-9d38-4d53-93d9-a50421172933","Type":"ContainerStarted","Data":"450e087d04a70c9f4aeb61b4b0b4d183ea1b4016a502df440a52344ca81b1820"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.839598 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" event={"ID":"72b84e1c-8ed8-4fae-8dff-ca2576579904","Type":"ContainerStarted","Data":"5a4bf8b1f9323c54345c7c674d450b58f467e170c1cca91fe80e41bb3406bb6b"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.839887 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.842791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" event={"ID":"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f","Type":"ContainerStarted","Data":"bb58f41ba0d7bc3497daaf361500782aaaecb85be2e543a6e8d3c7f64e671995"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.848299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" event={"ID":"46a7c725-b480-4f85-91d0-24831e713b26","Type":"ContainerStarted","Data":"a92a386f79e80bf4e17b0f08c9f8b25ac6bdd2650b6693d190dfdbfd0c8af1f3"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.848641 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.849922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" event={"ID":"5eacef6b-7362-4c43-912a-eb3e6ccce6e9","Type":"ContainerStarted","Data":"edd4304b41768580abd756eb069e1a9c8ea3a70a213fa5b0ec1e8062c8b94772"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.850141 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.851559 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" event={"ID":"16fd0d31-da4c-4c6b-bbc4-8302daee3ee5","Type":"ContainerStarted","Data":"b5792aa3ea345aaa15ab6ddce02fdd1c22763f18607958ecf444c1356b879ebd"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.851684 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.858341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" event={"ID":"c610cc53-6813-4c5b-86e9-b421aaa21666","Type":"ContainerStarted","Data":"fae208f85bbc6e2b44880a03acbc9965f6ac7869b9ba96a6a68a262f79ef1375"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.858492 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.859778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" event={"ID":"1ea9d2ea-ca11-428c-ab61-28bf391bcd4f","Type":"ContainerStarted","Data":"3afe49a430f5396920c65317dca74e2283cffca82f0e43e73ee572be0cb9ea13"} Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.859851 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.867450 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" podStartSLOduration=5.278968101 podStartE2EDuration="35.86743241s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.634691396 +0000 UTC m=+1004.272648742" lastFinishedPulling="2026-01-30 16:39:40.223155705 +0000 UTC m=+1034.861113051" observedRunningTime="2026-01-30 16:39:42.839241934 +0000 UTC m=+1037.477199280" watchObservedRunningTime="2026-01-30 16:39:42.86743241 +0000 UTC m=+1037.505389756" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.894109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" podStartSLOduration=4.110497572 podStartE2EDuration="35.894090745s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.286649291 +0000 UTC m=+1004.924606637" lastFinishedPulling="2026-01-30 16:39:42.070242464 +0000 UTC m=+1036.708199810" observedRunningTime="2026-01-30 
16:39:42.865821365 +0000 UTC m=+1037.503778721" watchObservedRunningTime="2026-01-30 16:39:42.894090745 +0000 UTC m=+1037.532048091" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.897933 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" podStartSLOduration=3.772302867 podStartE2EDuration="35.8979182s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.955299636 +0000 UTC m=+1004.593256982" lastFinishedPulling="2026-01-30 16:39:42.080914969 +0000 UTC m=+1036.718872315" observedRunningTime="2026-01-30 16:39:42.896035878 +0000 UTC m=+1037.533993224" watchObservedRunningTime="2026-01-30 16:39:42.8979182 +0000 UTC m=+1037.535875546" Jan 30 16:39:42 crc kubenswrapper[4766]: I0130 16:39:42.938578 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" podStartSLOduration=3.822224051 podStartE2EDuration="35.938554948s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.953731102 +0000 UTC m=+1004.591688448" lastFinishedPulling="2026-01-30 16:39:42.070061999 +0000 UTC m=+1036.708019345" observedRunningTime="2026-01-30 16:39:42.93204714 +0000 UTC m=+1037.570004486" watchObservedRunningTime="2026-01-30 16:39:42.938554948 +0000 UTC m=+1037.576512294" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.029651 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" podStartSLOduration=4.278456046 podStartE2EDuration="36.029633927s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.363167468 +0000 UTC m=+1005.001124814" lastFinishedPulling="2026-01-30 16:39:42.114345339 +0000 UTC m=+1036.752302695" observedRunningTime="2026-01-30 16:39:42.977982445 +0000 UTC m=+1037.615939801" watchObservedRunningTime="2026-01-30 16:39:43.029633927 +0000 UTC m=+1037.667591273" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.417712 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" podStartSLOduration=5.950941508 podStartE2EDuration="36.417685834s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.756310606 +0000 UTC m=+1004.394267952" lastFinishedPulling="2026-01-30 16:39:40.223054932 +0000 UTC m=+1034.861012278" observedRunningTime="2026-01-30 16:39:43.403449472 +0000 UTC m=+1038.041406818" watchObservedRunningTime="2026-01-30 16:39:43.417685834 +0000 UTC m=+1038.055643180" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.523927 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" podStartSLOduration=7.991083155 podStartE2EDuration="36.52390662s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.294620951 +0000 UTC m=+1004.932578297" lastFinishedPulling="2026-01-30 16:39:38.827444426 +0000 UTC m=+1033.465401762" observedRunningTime="2026-01-30 16:39:43.515387345 +0000 UTC m=+1038.153344691" watchObservedRunningTime="2026-01-30 16:39:43.52390662 +0000 UTC m=+1038.161863966" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.673649 4766 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" podStartSLOduration=4.559855796 podStartE2EDuration="36.673631994s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.173339339 +0000 UTC m=+1003.811296685" lastFinishedPulling="2026-01-30 16:39:41.287115537 +0000 UTC m=+1035.925072883" observedRunningTime="2026-01-30 16:39:43.624415928 +0000 UTC m=+1038.262373284" watchObservedRunningTime="2026-01-30 16:39:43.673631994 +0000 UTC m=+1038.311589340" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.676418 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" podStartSLOduration=6.906633208 podStartE2EDuration="36.6764049s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.883250791 +0000 UTC m=+1004.521208137" lastFinishedPulling="2026-01-30 16:39:39.653022483 +0000 UTC m=+1034.290979829" observedRunningTime="2026-01-30 16:39:43.663638498 +0000 UTC m=+1038.301595844" watchObservedRunningTime="2026-01-30 16:39:43.6764049 +0000 UTC m=+1038.314362246" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.705108 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" podStartSLOduration=7.58458048 podStartE2EDuration="36.70509155s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.706912705 +0000 UTC m=+1004.344870071" lastFinishedPulling="2026-01-30 16:39:38.827423795 +0000 UTC m=+1033.465381141" observedRunningTime="2026-01-30 16:39:43.701963844 +0000 UTC m=+1038.339921190" watchObservedRunningTime="2026-01-30 16:39:43.70509155 +0000 UTC m=+1038.343048896" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.750861 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" podStartSLOduration=6.80542964 podStartE2EDuration="36.750841869s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:09.707570913 +0000 UTC m=+1004.345528269" lastFinishedPulling="2026-01-30 16:39:39.652983152 +0000 UTC m=+1034.290940498" observedRunningTime="2026-01-30 16:39:43.74612337 +0000 UTC m=+1038.384080716" watchObservedRunningTime="2026-01-30 16:39:43.750841869 +0000 UTC m=+1038.388799215" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.794886 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" podStartSLOduration=5.075912468 podStartE2EDuration="36.794865022s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335054964 +0000 UTC m=+1004.973012310" lastFinishedPulling="2026-01-30 16:39:42.054007518 +0000 UTC m=+1036.691964864" observedRunningTime="2026-01-30 16:39:43.784480856 +0000 UTC m=+1038.422438202" watchObservedRunningTime="2026-01-30 16:39:43.794865022 +0000 UTC m=+1038.432822368" Jan 30 16:39:43 crc kubenswrapper[4766]: I0130 16:39:43.884160 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" event={"ID":"b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f","Type":"ContainerStarted","Data":"307cbff2f69062a50f7d2778ca3b52e7ec43b33e168a7b97c89925fb02a677a9"} Jan 30 16:39:43 crc 
kubenswrapper[4766]: I0130 16:39:43.945731 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" podStartSLOduration=35.945709637 podStartE2EDuration="35.945709637s" podCreationTimestamp="2026-01-30 16:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:39:43.93822915 +0000 UTC m=+1038.576186496" watchObservedRunningTime="2026-01-30 16:39:43.945709637 +0000 UTC m=+1038.583666983" Jan 30 16:39:44 crc kubenswrapper[4766]: I0130 16:39:44.908625 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.565723 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-ssl7s" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.582043 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-mlkcx" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.611407 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-rjgtk" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.706427 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6bfc9d4d48-7287m" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.709513 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-8hrwp" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.735083 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-lhxhc" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.867920 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-jhbv7" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.945438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" event={"ID":"90a2893c-9d38-4d53-93d9-a50421172933","Type":"ContainerStarted","Data":"1fcfb977a25dff299c6eb2e51ec9cd97ae99dae6c59d3a0b8bfaf953de13761d"} Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.945503 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.946938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" event={"ID":"09fcb126-016c-4b79-91d5-90e98e3da7f3","Type":"ContainerStarted","Data":"c55582d89c1c0ace67353aaa342e8b71119e0df6b43182ad4e5f341814f87e18"} Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.947189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.965668 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-l4pbc" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.979972 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" podStartSLOduration=36.024514376 podStartE2EDuration="40.979952003s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:42.365844966 +0000 UTC m=+1037.003802312" lastFinishedPulling="2026-01-30 16:39:47.321282593 +0000 UTC m=+1041.959239939" observedRunningTime="2026-01-30 16:39:47.97077876 +0000 UTC m=+1042.608736106" watchObservedRunningTime="2026-01-30 16:39:47.979952003 +0000 UTC m=+1042.617909349" Jan 30 16:39:47 crc kubenswrapper[4766]: I0130 16:39:47.989014 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-xkfn6" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.038125 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" podStartSLOduration=36.308273741 podStartE2EDuration="41.038103444s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:42.594552875 +0000 UTC m=+1037.232510221" lastFinishedPulling="2026-01-30 16:39:47.324382578 +0000 UTC m=+1041.962339924" observedRunningTime="2026-01-30 16:39:48.029364474 +0000 UTC m=+1042.667321820" watchObservedRunningTime="2026-01-30 16:39:48.038103444 +0000 UTC m=+1042.676060790" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.095981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-jzztd" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.346983 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bm24k" Jan 30 16:39:48 crc kubenswrapper[4766]: I0130 16:39:48.373332 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-566d8d7445-l44w4" Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.965820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" event={"ID":"0582a100-4b50-452f-baca-e67b4d6f2891","Type":"ContainerStarted","Data":"75dfd5a4a476f5dc63034938322a4851a7df22523ca53c97a008085c9a1540ac"} Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.967614 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:49 crc kubenswrapper[4766]: I0130 16:39:49.987168 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" podStartSLOduration=3.72412896 podStartE2EDuration="42.987150733s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.30476144 +0000 UTC m=+1004.942718786" lastFinishedPulling="2026-01-30 16:39:49.567783213 +0000 UTC m=+1044.205740559" observedRunningTime="2026-01-30 16:39:49.984528271 +0000 UTC m=+1044.622485617" watchObservedRunningTime="2026-01-30 16:39:49.987150733 +0000 UTC m=+1044.625108079" Jan 30 16:39:50 crc 
kubenswrapper[4766]: I0130 16:39:50.476378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bf68df65-m95g8" Jan 30 16:39:50 crc kubenswrapper[4766]: I0130 16:39:50.973968 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" event={"ID":"55fb4fd9-f80b-474b-b9c9-758720536349","Type":"ContainerStarted","Data":"6772c34b63fb25310f7fb07c2b13db0b2c7e0b518065a85bba060f0f1f999c42"} Jan 30 16:39:50 crc kubenswrapper[4766]: I0130 16:39:50.974598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:51 crc kubenswrapper[4766]: E0130 16:39:51.040590 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podUID="a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.055931 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" podStartSLOduration=4.333985016 podStartE2EDuration="44.055913177s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.312589045 +0000 UTC m=+1004.950546381" lastFinishedPulling="2026-01-30 16:39:50.034517196 +0000 UTC m=+1044.672474542" observedRunningTime="2026-01-30 16:39:50.991079512 +0000 UTC m=+1045.629036858" watchObservedRunningTime="2026-01-30 16:39:51.055913177 +0000 UTC m=+1045.693870523" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.982754 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" event={"ID":"0c603c94-f0b0-4820-a5a1-0ab9a76ceb49","Type":"ContainerStarted","Data":"ba91c91e87fa10084404ed3945fd007525003e77d9737cc0989457e8aa91b7a4"} Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.983224 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:51 crc kubenswrapper[4766]: I0130 16:39:51.999698 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" podStartSLOduration=3.6415919260000003 podStartE2EDuration="44.999678359s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335009793 +0000 UTC m=+1004.972967139" lastFinishedPulling="2026-01-30 16:39:51.693096226 +0000 UTC m=+1046.331053572" observedRunningTime="2026-01-30 16:39:51.995650559 +0000 UTC m=+1046.633607915" watchObservedRunningTime="2026-01-30 16:39:51.999678359 +0000 UTC m=+1046.637635705" Jan 30 16:39:52 crc kubenswrapper[4766]: E0130 16:39:52.040026 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:32d8aa084f9ca6788a465b65a4575f7a3bb38255c30c849c955e9173b4351ef2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" 
podUID="d4c39f8d-f83d-4311-bb99-24dfa7eaeafd" Jan 30 16:39:52 crc kubenswrapper[4766]: E0130 16:39:52.040427 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podUID="c03d46f4-f454-4b31-b4c7-5c324390d8ec" Jan 30 16:39:53 crc kubenswrapper[4766]: E0130 16:39:53.040804 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podUID="8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90" Jan 30 16:39:55 crc kubenswrapper[4766]: I0130 16:39:55.010842 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" event={"ID":"dc1c52ba-db5b-40ac-87da-de36346e8491","Type":"ContainerStarted","Data":"20bd4af3c341e4f3016a83e89e09799067fe9f419c9e5d74103a386bf16e6711"} Jan 30 16:39:55 crc kubenswrapper[4766]: I0130 16:39:55.030723 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-49xwp" podStartSLOduration=3.712452768 podStartE2EDuration="47.030690525s" podCreationTimestamp="2026-01-30 16:39:08 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.260932613 +0000 UTC m=+1004.898889949" lastFinishedPulling="2026-01-30 16:39:53.57917036 +0000 UTC m=+1048.217127706" observedRunningTime="2026-01-30 16:39:55.0290555 +0000 UTC m=+1049.667012846" watchObservedRunningTime="2026-01-30 16:39:55.030690525 +0000 UTC m=+1049.668647881" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.206076 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-6jc7f" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.489301 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-69484b8d9d-tqxks" Jan 30 16:39:58 crc kubenswrapper[4766]: I0130 16:39:58.799637 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-dklb4" Jan 30 16:39:59 crc kubenswrapper[4766]: I0130 16:39:59.584831 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ddthn" Jan 30 16:40:00 crc kubenswrapper[4766]: I0130 16:40:00.147901 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.116009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" event={"ID":"8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90","Type":"ContainerStarted","Data":"646c585ca92c0ecda837027335aa38cbb31bdecdb042a33c6d49a09bc43d110e"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.117465 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.118803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" event={"ID":"a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac","Type":"ContainerStarted","Data":"33503755cbeb7db55916eb0f2e9c15282992a3ac49d9479113aeeb520f1c1c3b"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.119220 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.120963 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" event={"ID":"c03d46f4-f454-4b31-b4c7-5c324390d8ec","Type":"ContainerStarted","Data":"f2c6d80928e55486925c0cba8e3b1fbdc73e64062b3f864cebed2d12441d42ac"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.121438 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.122535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" event={"ID":"d4c39f8d-f83d-4311-bb99-24dfa7eaeafd","Type":"ContainerStarted","Data":"9f51c8544da8e8cc080e3c1176f30bd20516683d5fddb56b50e86abde53669db"} Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.122694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.141437 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" podStartSLOduration=3.769448308 podStartE2EDuration="1m1.141386924s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335545628 +0000 UTC m=+1004.973502974" lastFinishedPulling="2026-01-30 16:40:07.707484244 +0000 UTC m=+1062.345441590" observedRunningTime="2026-01-30 16:40:08.141014154 +0000 UTC m=+1062.778971520" watchObservedRunningTime="2026-01-30 16:40:08.141386924 +0000 UTC m=+1062.779344270" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.165774 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" podStartSLOduration=4.024632645 podStartE2EDuration="1m1.165754445s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.338786687 +0000 UTC m=+1004.976744033" lastFinishedPulling="2026-01-30 16:40:07.479908487 +0000 UTC m=+1062.117865833" observedRunningTime="2026-01-30 16:40:08.159869313 +0000 UTC m=+1062.797826669" watchObservedRunningTime="2026-01-30 16:40:08.165754445 +0000 UTC m=+1062.803711791" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.183098 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" podStartSLOduration=3.709216708 podStartE2EDuration="1m1.183078722s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335430854 +0000 UTC m=+1004.973388200" lastFinishedPulling="2026-01-30 16:40:07.809292868 +0000 UTC 
m=+1062.447250214" observedRunningTime="2026-01-30 16:40:08.17790882 +0000 UTC m=+1062.815866176" watchObservedRunningTime="2026-01-30 16:40:08.183078722 +0000 UTC m=+1062.821036068" Jan 30 16:40:08 crc kubenswrapper[4766]: I0130 16:40:08.195517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" podStartSLOduration=3.821496211 podStartE2EDuration="1m1.195495584s" podCreationTimestamp="2026-01-30 16:39:07 +0000 UTC" firstStartedPulling="2026-01-30 16:39:10.335011793 +0000 UTC m=+1004.972969139" lastFinishedPulling="2026-01-30 16:40:07.709011166 +0000 UTC m=+1062.346968512" observedRunningTime="2026-01-30 16:40:08.192311256 +0000 UTC m=+1062.830268622" watchObservedRunningTime="2026-01-30 16:40:08.195495584 +0000 UTC m=+1062.833452930" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.150624 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-kkvlj" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.223722 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-swq4p" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.278169 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2jmqd" Jan 30 16:40:18 crc kubenswrapper[4766]: I0130 16:40:18.554974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-d7xxm" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.609397 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.611210 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.614445 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6ld2n" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615354 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615462 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.615519 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.650620 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.692914 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.699101 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.702038 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.710217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.710323 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.725262 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.812376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.813325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 
16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.849605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"dnsmasq-dns-675f4bcbfc-7v65m\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.913362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.914458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.914602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.928071 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:40:35 crc kubenswrapper[4766]: I0130 16:40:35.939102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"dnsmasq-dns-78dd6ddcc-69ttv\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.020728 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.561653 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:36 crc kubenswrapper[4766]: I0130 16:40:36.656556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:36 crc kubenswrapper[4766]: W0130 16:40:36.659669 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ba5675_86b8_409a_b2f5_c0dbd6b95f2b.slice/crio-7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68 WatchSource:0}: Error finding container 7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68: Status 404 returned error can't find the container with id 7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68 Jan 30 16:40:37 crc kubenswrapper[4766]: I0130 16:40:37.324687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" event={"ID":"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b","Type":"ContainerStarted","Data":"7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68"} Jan 30 16:40:37 crc kubenswrapper[4766]: I0130 16:40:37.326446 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" event={"ID":"900942aa-a667-42dc-9ddf-a1909585c2e3","Type":"ContainerStarted","Data":"4dd23f899f0d12a8b608725fc3a9970423f5d27f8151e6c03d79ba260849d2dc"} Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.119032 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.144473 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.146290 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.153315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.256974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.364832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.365900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.368036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.412665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsqpd\" (UniqueName: 
\"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"dnsmasq-dns-5ccc8479f9-6647p\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.499984 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.513115 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.558872 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.560060 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.598268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674202 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.674607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.775883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.775984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.776057 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.777104 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.777104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.816169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"dnsmasq-dns-57d769cc4f-rvfhb\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:38 crc kubenswrapper[4766]: I0130 16:40:38.969396 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.045779 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.045851 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.380842 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.391136 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.394483 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.396443 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.402738 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.402789 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.403042 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.403348 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.404337 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sx6cl" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.405532 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503082 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503157 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503244 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503305 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503325 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.503350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.548639 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608466 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608560 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608711 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608736 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.608844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.609269 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.610612 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.611060 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.612035 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.612373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.613527 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.618895 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.620853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.623358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.637799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.638429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.734556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.737845 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.756165 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.761203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.765677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774731 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774845 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.774945 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.775300 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.788906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.794968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.802282 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vc5hz" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819098 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819152 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819208 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819248 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819544 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819564 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.819621 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc 
kubenswrapper[4766]: I0130 16:40:39.932600 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932658 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932952 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.932990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.933022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.933080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.936964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.938517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.939253 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.942454 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.946147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.948305 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.958043 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.961274 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.961969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.977474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:39 crc kubenswrapper[4766]: I0130 16:40:39.978146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.017517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") " pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.148125 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.410604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" event={"ID":"a31c7217-d6d2-4cc1-ab83-016373333c80","Type":"ContainerStarted","Data":"c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5"} Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.411638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" event={"ID":"7c2933e1-c67d-45a6-8e08-fac512f6473b","Type":"ContainerStarted","Data":"a4daa3864bea92099d39184deffbe2394e36c62e86137adfe5c0a64228217582"} Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.511820 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:40:40 crc kubenswrapper[4766]: W0130 16:40:40.596460 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21357e1_82c9_419a_a191_359c84d6d001.slice/crio-3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107 WatchSource:0}: Error finding container 3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107: Status 404 returned error can't find the container with id 3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107 Jan 30 16:40:40 crc kubenswrapper[4766]: I0130 16:40:40.665905 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:40:40 crc kubenswrapper[4766]: W0130 16:40:40.679973 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2a138c_9abd_427b_815c_cbb9e12459f6.slice/crio-737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677 WatchSource:0}: Error finding container 737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677: Status 404 returned error can't find the container with id 737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677 Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.129231 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.130548 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.158031 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.158433 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.159751 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.160650 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x2qq7" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.173450 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.181063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294694 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294791 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.294812 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.395945 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396484 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396648 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.396843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.397806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.398928 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.399929 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.401055 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.405736 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.416272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.418805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.425846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.439121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") " pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.478205 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.556205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107"} Jan 30 16:40:41 crc kubenswrapper[4766]: I0130 16:40:41.599821 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677"} Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.395812 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: W0130 16:40:42.475710 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62dd6ad1_1550_48cf_b103_b7ab6dd93c97.slice/crio-7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f WatchSource:0}: Error finding container 7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f: Status 404 returned error can't find the container with id 7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.634129 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f"} Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.638222 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.639996 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.642781 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zd2kf" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643778 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.643852 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.646825 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746902 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.746942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747262 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.747327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.833353 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.848555 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849812 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849876 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849903 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.849966 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " 
pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.850011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.850801 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.852557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.852748 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854661 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.854872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.857072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fngzp" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.862074 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.863303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.864803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.911843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.918580 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"openstack-cell1-galera-0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.951991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952050 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.952141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:42 crc kubenswrapper[4766]: I0130 16:40:42.975431 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057613 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057777 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057822 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.057878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.059453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.059862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.084970 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.084976 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.094162 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"memcached-0\" (UID: 
\"61f7793d-39bd-4e96-a857-7de972f0c76d\") " pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.281879 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.882575 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:40:43 crc kubenswrapper[4766]: I0130 16:40:43.908237 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:40:43 crc kubenswrapper[4766]: W0130 16:40:43.960153 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f7793d_39bd_4e96_a857_7de972f0c76d.slice/crio-38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2 WatchSource:0}: Error finding container 38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2: Status 404 returned error can't find the container with id 38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2 Jan 30 16:40:44 crc kubenswrapper[4766]: W0130 16:40:44.044207 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ad68dc2_23ff_4044_b74d_149ae8f02bc0.slice/crio-86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3 WatchSource:0}: Error finding container 86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3: Status 404 returned error can't find the container with id 86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3 Jan 30 16:40:44 crc kubenswrapper[4766]: I0130 16:40:44.799132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerStarted","Data":"38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2"} Jan 30 16:40:44 crc kubenswrapper[4766]: I0130 16:40:44.837606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3"} Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.018046 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.019197 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.021158 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-db5vw" Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.044023 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.134021 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0" Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.236250 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0" Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.276775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"kube-state-metrics-0\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " pod="openstack/kube-state-metrics-0" Jan 30 16:40:45 crc kubenswrapper[4766]: I0130 16:40:45.356806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:40:46 crc kubenswrapper[4766]: I0130 16:40:46.094564 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:40:46 crc kubenswrapper[4766]: I0130 16:40:46.866695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerStarted","Data":"6ab83b607cb34660892c3f858dbee7a7095d74efd1f6621864cf951d1afb4fc6"} Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.327690 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.329260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335020 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nwj8z" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335384 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.335645 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.338931 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.392224 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.394469 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.402709 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418703 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418797 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418851 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.418991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520626 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521496 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.520706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: 
\"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521764 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.521991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.522151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.527131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.536805 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.537388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.547117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"ovn-controller-clmnh\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623775 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623912 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623945 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.623963 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624441 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.624586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc 
kubenswrapper[4766]: I0130 16:40:48.627828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.645059 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"ovn-controller-ovs-l6hkn\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.662073 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.731956 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.861187 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.863346 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866477 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866710 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.866851 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.867095 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.867258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zxvhd" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.882688 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930259 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 
16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930326 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:48 crc kubenswrapper[4766]: I0130 16:40:48.930524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033286 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033363 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033400 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033544 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033575 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.033606 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.034616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.037600 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.037944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.039685 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.041375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.042403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.049843 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.062140 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.063128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:49 crc kubenswrapper[4766]: I0130 16:40:49.210649 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.125759 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.131968 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.134522 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.136267 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8khlz" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.136712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.137002 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.138787 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.208942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209065 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209395 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209645 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209727 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209801 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.209834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.210004 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312122 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312639 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod 
\"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312725 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.312820 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.313107 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.313944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.314003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.314347 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.324131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.327382 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.327491 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.331711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.343168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:40:52 crc kubenswrapper[4766]: I0130 16:40:52.473568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805193 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805673 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7v65m_openstack(900942aa-a667-42dc-9ddf-a1909585c2e3): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805163 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.805850 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvdkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rvfhb_openstack(7c2933e1-c67d-45a6-8e08-fac512f6473b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.807040 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" podUID="900942aa-a667-42dc-9ddf-a1909585c2e3" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.807106 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.818515 4766 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.818734 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw52n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-69ttv_openstack(55ba5675-86b8-409a-b2f5-c0dbd6b95f2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:41:03 crc kubenswrapper[4766]: E0130 16:41:03.820076 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" podUID="55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.047818 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b" Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.050604 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.050883 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbx8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(bc2a138c-9abd-427b-815c-cbb9e12459f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:41:04 crc kubenswrapper[4766]: E0130 16:41:04.052406 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.054037 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.816076 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.816287 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4qrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(62dd6ad1-1550-48cf-b103-b7ab6dd93c97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.817701 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.867727 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.867889 4766 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsqpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-6647p_openstack(a31c7217-d6d2-4cc1-ab83-016373333c80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:41:05 crc kubenswrapper[4766]: E0130 16:41:05.868996 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80" Jan 30 16:41:06 crc kubenswrapper[4766]: E0130 16:41:06.060677 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" Jan 30 16:41:06 crc kubenswrapper[4766]: E0130 16:41:06.060975 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.262531 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.265084 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") pod \"900942aa-a667-42dc-9ddf-a1909585c2e3\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409388 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409428 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") pod \"900942aa-a667-42dc-9ddf-a1909585c2e3\" (UID: \"900942aa-a667-42dc-9ddf-a1909585c2e3\") " Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.409707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") pod \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\" (UID: \"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b\") " Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.410257 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.411142 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.411223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config" (OuterVolumeSpecName: "config") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.415080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config" (OuterVolumeSpecName: "config") pod "900942aa-a667-42dc-9ddf-a1909585c2e3" (UID: "900942aa-a667-42dc-9ddf-a1909585c2e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.415873 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr" (OuterVolumeSpecName: "kube-api-access-m5xtr") pod "900942aa-a667-42dc-9ddf-a1909585c2e3" (UID: "900942aa-a667-42dc-9ddf-a1909585c2e3"). InnerVolumeSpecName "kube-api-access-m5xtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.417545 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n" (OuterVolumeSpecName: "kube-api-access-qw52n") pod "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" (UID: "55ba5675-86b8-409a-b2f5-c0dbd6b95f2b"). InnerVolumeSpecName "kube-api-access-qw52n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515587 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900942aa-a667-42dc-9ddf-a1909585c2e3-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515666 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515679 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5xtr\" (UniqueName: \"kubernetes.io/projected/900942aa-a667-42dc-9ddf-a1909585c2e3-kube-api-access-m5xtr\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.515705 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw52n\" (UniqueName: \"kubernetes.io/projected/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b-kube-api-access-qw52n\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.788979 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:41:07 crc kubenswrapper[4766]: I0130 16:41:07.818646 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.073096 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.093038 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" event={"ID":"900942aa-a667-42dc-9ddf-a1909585c2e3","Type":"ContainerDied","Data":"4dd23f899f0d12a8b608725fc3a9970423f5d27f8151e6c03d79ba260849d2dc"} Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.093103 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7v65m" Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.097121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerStarted","Data":"7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f"} Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.097513 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.101096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387"} Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.104142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" event={"ID":"55ba5675-86b8-409a-b2f5-c0dbd6b95f2b","Type":"ContainerDied","Data":"7756343962f10d73aa86319a654f52c14c753be69477e9ff822516b343136a68"} Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.104253 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69ttv" Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.136210 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.760907124 podStartE2EDuration="26.136146395s" podCreationTimestamp="2026-01-30 16:40:42 +0000 UTC" firstStartedPulling="2026-01-30 16:40:44.015766462 +0000 UTC m=+1098.653723808" lastFinishedPulling="2026-01-30 16:41:07.391005733 +0000 UTC m=+1122.028963079" observedRunningTime="2026-01-30 16:41:08.129992516 +0000 UTC m=+1122.767949862" watchObservedRunningTime="2026-01-30 16:41:08.136146395 +0000 UTC m=+1122.774103741" Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.186679 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.189295 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7v65m"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.251764 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.264399 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69ttv"] Jan 30 16:41:08 crc kubenswrapper[4766]: I0130 16:41:08.422112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:41:08 crc kubenswrapper[4766]: W0130 16:41:08.550050 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a501828_e06b_4096_b555_1ecd9323ee20.slice/crio-f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0 WatchSource:0}: Error finding container f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0: Status 404 returned error can't find the container with id f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0 Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.045638 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.045705 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.112947 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.115132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerStarted","Data":"35bff03af4700c59de26d7f263ff6609c1c1e4962e327e55accdbc5ea2056c14"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.118687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.123474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"e1760b87e9caefe6e9c0ac6d3d9d8457bd91e81888eeb4755458d5a683cbea69"} Jan 30 16:41:09 crc kubenswrapper[4766]: I0130 16:41:09.125532 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"44d944c146c567ab0a586afa23a8e30b46436b5558ae7e1ed7aeb15de65469a1"} Jan 30 16:41:10 crc kubenswrapper[4766]: I0130 16:41:10.050900 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ba5675-86b8-409a-b2f5-c0dbd6b95f2b" path="/var/lib/kubelet/pods/55ba5675-86b8-409a-b2f5-c0dbd6b95f2b/volumes" Jan 30 16:41:10 crc kubenswrapper[4766]: I0130 16:41:10.051435 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900942aa-a667-42dc-9ddf-a1909585c2e3" path="/var/lib/kubelet/pods/900942aa-a667-42dc-9ddf-a1909585c2e3/volumes" Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.156397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerStarted","Data":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.156800 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 16:41:11 crc kubenswrapper[4766]: I0130 16:41:11.179125 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.156861909 podStartE2EDuration="26.17910614s" podCreationTimestamp="2026-01-30 16:40:45 +0000 UTC" firstStartedPulling="2026-01-30 16:40:46.114594005 +0000 UTC m=+1100.752551351" lastFinishedPulling="2026-01-30 16:41:10.136838236 +0000 UTC m=+1124.774795582" observedRunningTime="2026-01-30 16:41:11.177712382 +0000 UTC m=+1125.815669728" 
watchObservedRunningTime="2026-01-30 16:41:11.17910614 +0000 UTC m=+1125.817063486" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.172982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.175426 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.177741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.180732 4766 generic.go:334] "Generic (PLEG): container finished" podID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerID="e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387" exitCode=0 Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.180805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.185698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerStarted","Data":"cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7"} Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.186366 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-clmnh" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.223399 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-clmnh" podStartSLOduration=21.022885557 podStartE2EDuration="25.223377541s" podCreationTimestamp="2026-01-30 16:40:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.54913066 +0000 UTC m=+1123.187088016" lastFinishedPulling="2026-01-30 16:41:12.749622654 +0000 UTC m=+1127.387580000" observedRunningTime="2026-01-30 16:41:13.221274813 +0000 UTC m=+1127.859232179" watchObservedRunningTime="2026-01-30 16:41:13.223377541 +0000 UTC m=+1127.861334887" Jan 30 16:41:13 crc kubenswrapper[4766]: I0130 16:41:13.283336 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.197774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerStarted","Data":"83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae"} Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.203528 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4" exitCode=0 Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.203659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" 
event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4"} Jan 30 16:41:14 crc kubenswrapper[4766]: I0130 16:41:14.224109 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.901606543 podStartE2EDuration="33.224086471s" podCreationTimestamp="2026-01-30 16:40:41 +0000 UTC" firstStartedPulling="2026-01-30 16:40:44.074838139 +0000 UTC m=+1098.712795485" lastFinishedPulling="2026-01-30 16:41:07.397318067 +0000 UTC m=+1122.035275413" observedRunningTime="2026-01-30 16:41:14.219585108 +0000 UTC m=+1128.857542454" watchObservedRunningTime="2026-01-30 16:41:14.224086471 +0000 UTC m=+1128.862043817" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.226924 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerStarted","Data":"68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.233306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerStarted","Data":"0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.242893 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2"} Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.258515 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.084553136 podStartE2EDuration="28.258498851s" podCreationTimestamp="2026-01-30 16:40:47 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.546397254 +0000 UTC m=+1123.184354600" lastFinishedPulling="2026-01-30 16:41:14.720342979 +0000 UTC m=+1129.358300315" observedRunningTime="2026-01-30 16:41:15.256350371 +0000 UTC m=+1129.894307727" watchObservedRunningTime="2026-01-30 16:41:15.258498851 +0000 UTC m=+1129.896456207" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.296742 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.882150282 podStartE2EDuration="24.296717943s" podCreationTimestamp="2026-01-30 16:40:51 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.288736128 +0000 UTC m=+1122.926693474" lastFinishedPulling="2026-01-30 16:41:14.703303779 +0000 UTC m=+1129.341261135" observedRunningTime="2026-01-30 16:41:15.282956894 +0000 UTC m=+1129.920914260" watchObservedRunningTime="2026-01-30 16:41:15.296717943 +0000 UTC m=+1129.934675289" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.343091 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.375693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.433278 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.435025 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.450246 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475785 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.475928 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579164 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.579243 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.580230 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.581042 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.624265 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88kl\" (UniqueName: 
\"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"dnsmasq-dns-7cb5889db5-nw485\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.792611 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.968112 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.992416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.992995 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.993127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.993887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config" (OuterVolumeSpecName: "config") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.994051 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") pod \"7c2933e1-c67d-45a6-8e08-fac512f6473b\" (UID: \"7c2933e1-c67d-45a6-8e08-fac512f6473b\") " Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.995123 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:15 crc kubenswrapper[4766]: I0130 16:41:15.995350 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2933e1-c67d-45a6-8e08-fac512f6473b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.002303 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd" (OuterVolumeSpecName: "kube-api-access-jvdkd") pod "7c2933e1-c67d-45a6-8e08-fac512f6473b" (UID: "7c2933e1-c67d-45a6-8e08-fac512f6473b"). InnerVolumeSpecName "kube-api-access-jvdkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.097323 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdkd\" (UniqueName: \"kubernetes.io/projected/7c2933e1-c67d-45a6-8e08-fac512f6473b-kube-api-access-jvdkd\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.211680 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.252865 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.252887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rvfhb" event={"ID":"7c2933e1-c67d-45a6-8e08-fac512f6473b","Type":"ContainerDied","Data":"a4daa3864bea92099d39184deffbe2394e36c62e86137adfe5c0a64228217582"} Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.258007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerStarted","Data":"83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9"} Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.258443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.262164 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.321928 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.333135 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rvfhb"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.341112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:16 crc kubenswrapper[4766]: W0130 16:41:16.341333 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8114a4cb_b868_4813_836e_6e12b1b37c00.slice/crio-6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15 WatchSource:0}: Error finding container 6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15: Status 404 returned error can't find the container with id 6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15 Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.342528 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l6hkn" podStartSLOduration=24.150965957 podStartE2EDuration="28.342504985s" podCreationTimestamp="2026-01-30 16:40:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:08.556389689 +0000 UTC m=+1123.194347035" lastFinishedPulling="2026-01-30 16:41:12.747928717 +0000 UTC m=+1127.385886063" observedRunningTime="2026-01-30 16:41:16.329395364 +0000 UTC m=+1130.967352740" watchObservedRunningTime="2026-01-30 16:41:16.342504985 +0000 UTC m=+1130.980462341" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.474592 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 
16:41:16.515480 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.619890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.635303 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638272 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638577 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r75sb" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.638765 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.639583 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.643697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811715 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811786 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811829 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.811958 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.812009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 
16:41:16.862515 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.863893 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.865675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.866337 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.868338 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.874716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913525 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913637 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.913737 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913841 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913871 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:16 crc kubenswrapper[4766]: E0130 16:41:16.913928 4766 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:17.413906731 +0000 UTC m=+1132.051864067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914076 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914145 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.914259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.920378 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.936229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:16 crc kubenswrapper[4766]: I0130 16:41:16.945433 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015537 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015577 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.015701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117577 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117672 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117812 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117839 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.117865 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118067 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.118825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.121344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.121483 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.125036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.140885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx9v\" (UniqueName: 
\"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"swift-ring-rebalance-n8rf4\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.181907 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerStarted","Data":"6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15"} Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265764 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265813 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.265829 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423391 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423611 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:17 crc kubenswrapper[4766]: E0130 16:41:17.423674 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:18.42365335 +0000 UTC m=+1133.061610696 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.423390 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:17 crc kubenswrapper[4766]: I0130 16:41:17.641660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.059066 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2933e1-c67d-45a6-8e08-fac512f6473b" path="/var/lib/kubelet/pods/7c2933e1-c67d-45a6-8e08-fac512f6473b/volumes" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.273591 4766 generic.go:334] "Generic (PLEG): container finished" podID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerID="47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f" exitCode=0 Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.273656 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f"} Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.281461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerStarted","Data":"0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204"} Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.336854 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.340208 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.445883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446119 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446152 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:18 crc kubenswrapper[4766]: E0130 16:41:18.446372 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:20.446353157 +0000 UTC m=+1135.084310503 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.589257 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.597047 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.599822 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.614446 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.615585 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650000 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650049 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.650194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.752650 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753378 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.753996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.754249 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.797397 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"dnsmasq-dns-6c89d5d749-qwlk6\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.807008 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.808380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.819472 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.823961 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.855957 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856117 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.856363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.934053 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.936709 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958147 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958307 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958489 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958619 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.958645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.959523 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod 
\"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.978226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.981226 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:18 crc kubenswrapper[4766]: I0130 16:41:18.995152 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.004855 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.007673 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016224 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016277 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016489 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.016663 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-njt4v" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.027931 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.047838 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.059975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060053 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060117 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060380 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060410 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060435 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.060456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.102526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.118077 4766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.118545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rsxl2\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161515 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161615 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " 
pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161736 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161769 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.161840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.162962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.163264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.163940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.164768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.164911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.165248 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.166857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.177735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.183965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.193885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.194147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"dnsmasq-dns-698758b865-rghwg\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.194442 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.341649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" event={"ID":"a31c7217-d6d2-4cc1-ab83-016373333c80","Type":"ContainerDied","Data":"c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5"} Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.341721 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a6f86e26a3a7d3d41158b0d1740e813c2a23a97f1db4e06c00cc07c4e615e5" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.347156 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.358667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerStarted","Data":"88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d"} Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.365540 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.384908 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.387435 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" podStartSLOduration=3.635170537 podStartE2EDuration="4.387416113s" podCreationTimestamp="2026-01-30 16:41:15 +0000 UTC" firstStartedPulling="2026-01-30 16:41:16.343808191 +0000 UTC m=+1130.981765537" lastFinishedPulling="2026-01-30 16:41:17.096053767 +0000 UTC m=+1131.734011113" observedRunningTime="2026-01-30 16:41:19.384864794 +0000 UTC m=+1134.022822140" watchObservedRunningTime="2026-01-30 16:41:19.387416113 +0000 UTC m=+1134.025373459" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.388831 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578633 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.578726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") pod \"a31c7217-d6d2-4cc1-ab83-016373333c80\" (UID: \"a31c7217-d6d2-4cc1-ab83-016373333c80\") " Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.580225 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.580907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config" (OuterVolumeSpecName: "config") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.585024 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.585054 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7217-d6d2-4cc1-ab83-016373333c80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.627964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd" (OuterVolumeSpecName: "kube-api-access-dsqpd") pod "a31c7217-d6d2-4cc1-ab83-016373333c80" (UID: "a31c7217-d6d2-4cc1-ab83-016373333c80"). InnerVolumeSpecName "kube-api-access-dsqpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.628050 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:19 crc kubenswrapper[4766]: W0130 16:41:19.651972 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd754d8cb_87c5_4ca2_a9d2_e3aef7548f2d.slice/crio-8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465 WatchSource:0}: Error finding container 8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465: Status 404 returned error can't find the container with id 8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465 Jan 30 16:41:19 crc kubenswrapper[4766]: I0130 16:41:19.687714 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsqpd\" (UniqueName: \"kubernetes.io/projected/a31c7217-d6d2-4cc1-ab83-016373333c80-kube-api-access-dsqpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.107401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:41:20 crc kubenswrapper[4766]: W0130 16:41:20.111388 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod140fa04a_cb22_40ed_a08c_17f4ea13a5c4.slice/crio-c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426 WatchSource:0}: Error finding container c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426: Status 404 returned error can't find the container with id c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.115524 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:41:20 crc kubenswrapper[4766]: W0130 16:41:20.117663 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4db25e7_718f_4a48_8dd2_2db2ae9e804c.slice/crio-9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89 WatchSource:0}: Error finding container 9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89: Status 404 returned error can't find the container with id 9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.192016 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.373933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.375473 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"090eddff40a00fe6ea2b9a4d39ef4e8496a69421f9440b673916d296607e29b3"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.379033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerStarted","Data":"c5dccc2b2d4eb0624084e2830a6c6a2e7d81c9d945cf1979593549236acac426"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.380252 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.381896 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6647p" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.381912 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5"} Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.382404 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.382591 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" containerID="cri-o://88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" gracePeriod=10 Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.428997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.434602 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6647p"] Jan 30 16:41:20 crc kubenswrapper[4766]: I0130 16:41:20.511069 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511393 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511447 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:20 crc kubenswrapper[4766]: E0130 16:41:20.511508 4766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:24.511488332 +0000 UTC m=+1139.149445678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.393595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.395857 4766 generic.go:334] "Generic (PLEG): container finished" podID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerID="88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" exitCode=0 Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.395919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.397414 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerStarted","Data":"ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.398911 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} Jan 30 16:41:21 crc kubenswrapper[4766]: I0130 16:41:21.401606 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.048588 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31c7217-d6d2-4cc1-ab83-016373333c80" path="/var/lib/kubelet/pods/a31c7217-d6d2-4cc1-ab83-016373333c80/volumes" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.413946 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerID="e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171" exitCode=0 Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.414024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.417300 4766 generic.go:334] "Generic (PLEG): container finished" podID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" exitCode=0 Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.417645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.497118 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rsxl2" podStartSLOduration=4.497097956 podStartE2EDuration="4.497097956s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:22.495078681 +0000 UTC m=+1137.133036047" watchObservedRunningTime="2026-01-30 16:41:22.497097956 +0000 UTC m=+1137.135055322" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.977163 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:22 crc kubenswrapper[4766]: I0130 16:41:22.977594 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.055555 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.429315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerStarted","Data":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.429705 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.441676 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerStarted","Data":"d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7"} Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.448691 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" podStartSLOduration=5.448654913 podStartE2EDuration="5.448654913s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:23.446474893 +0000 UTC m=+1138.084432239" watchObservedRunningTime="2026-01-30 16:41:23.448654913 +0000 UTC m=+1138.086612259" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.489165 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rghwg" podStartSLOduration=5.489144529 podStartE2EDuration="5.489144529s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:23.484349346 +0000 UTC m=+1138.122306712" watchObservedRunningTime="2026-01-30 16:41:23.489144529 +0000 UTC m=+1138.127101875" Jan 30 16:41:23 crc kubenswrapper[4766]: I0130 16:41:23.531594 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.385491 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.385558 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nw485" event={"ID":"8114a4cb-b868-4813-836e-6e12b1b37c00","Type":"ContainerDied","Data":"6a26427f3e3e29a1dbf4dfdb6a3c3ecc4231decf4e2de22b82360cb9a413fd15"} Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.454885 4766 scope.go:117] "RemoveContainer" containerID="88fb18e01e6b586a98e304f5d04726ff189a547e7fe84ce42c179ad7614d8d6d" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494578 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494799 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.494838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") pod \"8114a4cb-b868-4813-836e-6e12b1b37c00\" (UID: \"8114a4cb-b868-4813-836e-6e12b1b37c00\") " Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.502675 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl" (OuterVolumeSpecName: "kube-api-access-g88kl") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "kube-api-access-g88kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.541973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config" (OuterVolumeSpecName: "config") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.550964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8114a4cb-b868-4813-836e-6e12b1b37c00" (UID: "8114a4cb-b868-4813-836e-6e12b1b37c00"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.597874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598355 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598681 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8114a4cb-b868-4813-836e-6e12b1b37c00-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.598695 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g88kl\" (UniqueName: \"kubernetes.io/projected/8114a4cb-b868-4813-836e-6e12b1b37c00-kube-api-access-g88kl\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598712 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598737 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: E0130 16:41:24.598782 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:32.598763578 +0000 UTC m=+1147.236720924 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.795937 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:24 crc kubenswrapper[4766]: I0130 16:41:24.803507 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nw485"] Jan 30 16:41:25 crc kubenswrapper[4766]: I0130 16:41:25.545863 4766 scope.go:117] "RemoveContainer" containerID="47fbf09005959840cf9c0719b304d36f50890aa9f40b3160e0f527a56e67579f" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.060690 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" path="/var/lib/kubelet/pods/8114a4cb-b868-4813-836e-6e12b1b37c00/volumes" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.486351 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.487235 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerStarted","Data":"1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.492832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerStarted","Data":"d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.494492 4766 generic.go:334] "Generic (PLEG): container finished" podID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171" exitCode=0 Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.494530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.510393 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.796306466 podStartE2EDuration="8.510347214s" podCreationTimestamp="2026-01-30 16:41:18 +0000 UTC" firstStartedPulling="2026-01-30 16:41:20.19734701 +0000 UTC m=+1134.835304346" lastFinishedPulling="2026-01-30 16:41:25.911387748 +0000 UTC m=+1140.549345094" observedRunningTime="2026-01-30 16:41:26.505673595 +0000 UTC m=+1141.143630961" watchObservedRunningTime="2026-01-30 16:41:26.510347214 +0000 UTC m=+1141.148304590" Jan 30 16:41:26 crc kubenswrapper[4766]: I0130 16:41:26.529251 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n8rf4" podStartSLOduration=2.346672951 podStartE2EDuration="10.529217974s" podCreationTimestamp="2026-01-30 16:41:16 +0000 UTC" firstStartedPulling="2026-01-30 16:41:17.645191032 +0000 UTC m=+1132.283148378" lastFinishedPulling="2026-01-30 16:41:25.827736055 +0000 UTC m=+1140.465693401" 
observedRunningTime="2026-01-30 16:41:26.526951302 +0000 UTC m=+1141.164908648" watchObservedRunningTime="2026-01-30 16:41:26.529217974 +0000 UTC m=+1141.167175320" Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.505057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerStarted","Data":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"} Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.505512 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 16:41:27 crc kubenswrapper[4766]: I0130 16:41:27.531169 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371989.323624 podStartE2EDuration="47.531152728s" podCreationTimestamp="2026-01-30 16:40:40 +0000 UTC" firstStartedPulling="2026-01-30 16:40:42.514465045 +0000 UTC m=+1097.152422381" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:27.527424905 +0000 UTC m=+1142.165382251" watchObservedRunningTime="2026-01-30 16:41:27.531152728 +0000 UTC m=+1142.169110074" Jan 30 16:41:28 crc kubenswrapper[4766]: I0130 16:41:28.936374 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.387432 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.439077 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:29 crc kubenswrapper[4766]: I0130 16:41:29.520944 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" containerID="cri-o://e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" gracePeriod=10 Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.432763 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507595 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.507867 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") pod \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\" (UID: \"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d\") " Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.514020 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2" (OuterVolumeSpecName: "kube-api-access-frqn2") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "kube-api-access-frqn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529468 4766 generic.go:334] "Generic (PLEG): container finished" podID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" exitCode=0 Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529536 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" event={"ID":"d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d","Type":"ContainerDied","Data":"8d9a46eb1216a3100209bf80420923758d0ab5dbd1dd0ccea57c594994eba465"} Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529583 4766 scope.go:117] "RemoveContainer" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.529725 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-qwlk6" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.545566 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.553445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config" (OuterVolumeSpecName: "config") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.554120 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" (UID: "d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.601344 4766 scope.go:117] "RemoveContainer" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610212 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqn2\" (UniqueName: \"kubernetes.io/projected/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-kube-api-access-frqn2\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610245 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610255 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.610263 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.621635 4766 scope.go:117] "RemoveContainer" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: E0130 16:41:30.622114 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": container with ID starting with e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83 not found: ID does not exist" containerID="e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622150 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83"} err="failed to get container status \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": rpc error: code = NotFound desc = could not find container \"e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83\": container with ID starting with e7daa579f70bf6a3acb178e9ec1ed8c1fc385b21c22bdf54aaf089b63285df83 not found: ID does not exist" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622172 4766 scope.go:117] "RemoveContainer" 
containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: E0130 16:41:30.622545 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": container with ID starting with 9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd not found: ID does not exist" containerID="9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.622579 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd"} err="failed to get container status \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": rpc error: code = NotFound desc = could not find container \"9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd\": container with ID starting with 9fb0691e704a0ac758a7232bb2f768608170d0a691b085e55b2a3e90678520dd not found: ID does not exist" Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.862694 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:30 crc kubenswrapper[4766]: I0130 16:41:30.869665 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-qwlk6"] Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.479526 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.479976 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668035 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668462 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668495 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668502 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668523 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668530 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: E0130 16:41:31.668549 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668555 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="init" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 
16:41:31.668726 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.668745 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114a4cb-b868-4813-836e-6e12b1b37c00" containerName="dnsmasq-dns" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.669378 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.672141 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.678351 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.730754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.730878 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.832079 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.832306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.833036 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.864642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"root-account-create-update-wht5r\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:31 crc kubenswrapper[4766]: I0130 16:41:31.989261 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.049679 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d" path="/var/lib/kubelet/pods/d754d8cb-87c5-4ca2-a9d2-e3aef7548f2d/volumes" Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.467021 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.556273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerStarted","Data":"aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97"} Jan 30 16:41:32 crc kubenswrapper[4766]: I0130 16:41:32.651803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652136 4766 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652374 4766 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:41:32 crc kubenswrapper[4766]: E0130 16:41:32.652440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift podName:8b182790-0761-450c-85d1-63ddd59ac10f nodeName:}" failed. No retries permitted until 2026-01-30 16:41:48.652418582 +0000 UTC m=+1163.290375928 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift") pod "swift-storage-0" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f") : configmap "swift-ring-files" not found Jan 30 16:41:33 crc kubenswrapper[4766]: I0130 16:41:33.567422 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerStarted","Data":"29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8"} Jan 30 16:41:33 crc kubenswrapper[4766]: I0130 16:41:33.585520 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wht5r" podStartSLOduration=2.58549986 podStartE2EDuration="2.58549986s" podCreationTimestamp="2026-01-30 16:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:33.581216071 +0000 UTC m=+1148.219173427" watchObservedRunningTime="2026-01-30 16:41:33.58549986 +0000 UTC m=+1148.223457206" Jan 30 16:41:35 crc kubenswrapper[4766]: I0130 16:41:35.586646 4766 generic.go:334] "Generic (PLEG): container finished" podID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerID="d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1" exitCode=0 Jan 30 16:41:35 crc kubenswrapper[4766]: I0130 16:41:35.586739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerDied","Data":"d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1"} Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.596744 4766 generic.go:334] "Generic (PLEG): container finished" podID="93fa2128-fb98-4cca-9067-a864a6207188" containerID="29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8" exitCode=0 Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.596827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerDied","Data":"29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8"} Jan 30 16:41:36 crc kubenswrapper[4766]: I0130 16:41:36.924075 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.030501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.108507 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" probeResult="failure" output=< Jan 30 16:41:37 crc kubenswrapper[4766]: wsrep_local_state_comment (Joined) differs from Synced Jan 30 16:41:37 crc kubenswrapper[4766]: > Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.114879 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") pod \"6da00370-0819-4857-8fa3-1ffe3e6b628b\" (UID: \"6da00370-0819-4857-8fa3-1ffe3e6b628b\") " Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.115144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.116154 4766 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.116321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.121796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v" (OuterVolumeSpecName: "kube-api-access-8hx9v") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "kube-api-access-8hx9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.126717 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.142153 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts" (OuterVolumeSpecName: "scripts") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.144012 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.146809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6da00370-0819-4857-8fa3-1ffe3e6b628b" (UID: "6da00370-0819-4857-8fa3-1ffe3e6b628b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.217815 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218135 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da00370-0819-4857-8fa3-1ffe3e6b628b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218233 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hx9v\" (UniqueName: \"kubernetes.io/projected/6da00370-0819-4857-8fa3-1ffe3e6b628b-kube-api-access-8hx9v\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218330 4766 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218435 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6da00370-0819-4857-8fa3-1ffe3e6b628b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.218672 4766 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6da00370-0819-4857-8fa3-1ffe3e6b628b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.608996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8rf4" event={"ID":"6da00370-0819-4857-8fa3-1ffe3e6b628b","Type":"ContainerDied","Data":"0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204"} Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.609404 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2a1beef2986dc171385e28859599afa82cdfc8eed7aa1c2a744690930b7204" Jan 30 16:41:37 crc kubenswrapper[4766]: I0130 16:41:37.609024 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8rf4" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.018942 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.133848 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") pod \"93fa2128-fb98-4cca-9067-a864a6207188\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.133890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") pod \"93fa2128-fb98-4cca-9067-a864a6207188\" (UID: \"93fa2128-fb98-4cca-9067-a864a6207188\") " Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.134683 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93fa2128-fb98-4cca-9067-a864a6207188" (UID: "93fa2128-fb98-4cca-9067-a864a6207188"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.138934 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt" (OuterVolumeSpecName: "kube-api-access-wswmt") pod "93fa2128-fb98-4cca-9067-a864a6207188" (UID: "93fa2128-fb98-4cca-9067-a864a6207188"). InnerVolumeSpecName "kube-api-access-wswmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.236047 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93fa2128-fb98-4cca-9067-a864a6207188-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.236084 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswmt\" (UniqueName: \"kubernetes.io/projected/93fa2128-fb98-4cca-9067-a864a6207188-kube-api-access-wswmt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wht5r" event={"ID":"93fa2128-fb98-4cca-9067-a864a6207188","Type":"ContainerDied","Data":"aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97"} Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626624 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea6ed23d3ef964fc62d7cf8523fae82358a8f95c83877ca02c400c33f672f97" Jan 30 16:41:38 crc kubenswrapper[4766]: I0130 16:41:38.626716 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wht5r" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.045699 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046039 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046087 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046754 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.046811 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" gracePeriod=600 Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.428875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.638282 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" exitCode=0 Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.638414 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0"} Jan 30 16:41:39 crc kubenswrapper[4766]: I0130 16:41:39.639074 4766 scope.go:117] "RemoveContainer" containerID="5e25fe15fa17987c12e4d9db1a1dd14967f9d491c11f7c6086924c59f51346cf" Jan 30 16:41:40 crc kubenswrapper[4766]: I0130 16:41:40.649405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.553285 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.659090 4766 generic.go:334] "Generic (PLEG): container finished" podID="b21357e1-82c9-419a-a191-359c84d6d001" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d" 
exitCode=0 Jan 30 16:41:41 crc kubenswrapper[4766]: I0130 16:41:41.659438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.785112 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:42 crc kubenswrapper[4766]: E0130 16:41:42.786145 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786164 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: E0130 16:41:42.786226 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786236 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786437 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" containerName="swift-ring-rebalance" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.786454 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fa2128-fb98-4cca-9067-a864a6207188" containerName="mariadb-account-create-update" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.787056 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.794666 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.876324 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.877744 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.881888 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.886953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.917011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:42 crc kubenswrapper[4766]: I0130 16:41:42.917063 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018529 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018585 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018627 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.018648 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.019337 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.039876 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod 
\"keystone-db-create-qdgxb\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.091335 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.092845 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.104134 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.104865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.120249 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.120316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.122075 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.182004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"keystone-e3be-account-create-update-n7qg6\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.199379 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.210485 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.211927 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.214356 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.220942 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.221693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.221804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.325336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.326455 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.327339 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.345921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52pnt\" (UniqueName: 
\"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"placement-db-create-nwrgq\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.410382 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.431349 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.431430 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.432065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.466697 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.467965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"placement-cc14-account-create-update-jhjn2\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.468220 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.483780 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.532977 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.533405 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.571960 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.573277 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.575855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.595132 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.606751 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.635392 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.635519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.636334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.665085 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"glance-db-create-2h7p2\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.677983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerStarted","Data":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"} Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.678290 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.711995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.172516344 podStartE2EDuration="1m5.71197083s" podCreationTimestamp="2026-01-30 16:40:38 +0000 UTC" firstStartedPulling="2026-01-30 16:40:40.622673684 +0000 UTC m=+1095.260631030" lastFinishedPulling="2026-01-30 16:41:07.16212817 +0000 UTC m=+1121.800085516" observedRunningTime="2026-01-30 16:41:43.703832036 +0000 UTC m=+1158.341789382" watchObservedRunningTime="2026-01-30 16:41:43.71197083 +0000 UTC m=+1158.349928176" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.716850 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:41:43 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:41:43 crc kubenswrapper[4766]: > Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.736903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " 
pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.736981 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.761000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:41:43 crc kubenswrapper[4766]: W0130 16:41:43.769695 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dbf5802_dfa7_4b32_aaa5_48fc779da5d6.slice/crio-41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8 WatchSource:0}: Error finding container 41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8: Status 404 returned error can't find the container with id 41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8 Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.798540 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.838342 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.838986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.839131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.855120 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.862982 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"glance-63c5-account-create-update-sx7bq\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.900204 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:43 crc kubenswrapper[4766]: I0130 16:41:43.972756 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.166572 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:41:44 crc kubenswrapper[4766]: W0130 16:41:44.175721 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75830eb2_571a_4fef_92b5_057b0928cfe0.slice/crio-7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727 WatchSource:0}: Error finding container 7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727: Status 404 returned error can't find the container with id 7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.264494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:41:44 crc kubenswrapper[4766]: W0130 16:41:44.292789 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c8af029_8432_4152_8e74_5c40d72636d7.slice/crio-e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6 WatchSource:0}: Error finding container e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6: Status 404 returned error can't find the container with id e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.362762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689393 4766 generic.go:334] "Generic (PLEG): container finished" podID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerID="cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395" exitCode=0 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerDied","Data":"cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.689513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerStarted","Data":"07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691717 4766 generic.go:334] "Generic (PLEG): container finished" podID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerID="7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d" exitCode=0 Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerDied","Data":"7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.691846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" 
event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerStarted","Data":"41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.693482 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerStarted","Data":"996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.693515 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerStarted","Data":"e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.699886 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerStarted","Data":"2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.699925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerStarted","Data":"7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.705889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerStarted","Data":"5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.705942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerStarted","Data":"eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.708244 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerStarted","Data":"e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.708293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerStarted","Data":"3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11"} Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.750152 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e3be-account-create-update-n7qg6" podStartSLOduration=2.750125002 podStartE2EDuration="2.750125002s" podCreationTimestamp="2026-01-30 16:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.732761514 +0000 UTC m=+1159.370718860" watchObservedRunningTime="2026-01-30 16:41:44.750125002 +0000 UTC m=+1159.388082338" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.753412 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cc14-account-create-update-jhjn2" podStartSLOduration=1.753392221 
podStartE2EDuration="1.753392221s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.748560029 +0000 UTC m=+1159.386517395" watchObservedRunningTime="2026-01-30 16:41:44.753392221 +0000 UTC m=+1159.391349567" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.776815 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2h7p2" podStartSLOduration=1.7767925070000001 podStartE2EDuration="1.776792507s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.769835094 +0000 UTC m=+1159.407792450" watchObservedRunningTime="2026-01-30 16:41:44.776792507 +0000 UTC m=+1159.414749853" Jan 30 16:41:44 crc kubenswrapper[4766]: I0130 16:41:44.796833 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-63c5-account-create-update-sx7bq" podStartSLOduration=1.796810558 podStartE2EDuration="1.796810558s" podCreationTimestamp="2026-01-30 16:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:44.794534365 +0000 UTC m=+1159.432491711" watchObservedRunningTime="2026-01-30 16:41:44.796810558 +0000 UTC m=+1159.434767904" Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.718265 4766 generic.go:334] "Generic (PLEG): container finished" podID="4c8af029-8432-4152-8e74-5c40d72636d7" containerID="996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.718669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerDied","Data":"996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.722940 4766 generic.go:334] "Generic (PLEG): container finished" podID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerID="2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.723247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerDied","Data":"2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.725622 4766 generic.go:334] "Generic (PLEG): container finished" podID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerID="5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.725714 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerDied","Data":"5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb"} Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 16:41:45.728211 4766 generic.go:334] "Generic (PLEG): container finished" podID="acb52775-c639-4afc-9f21-f33531a854b3" containerID="e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e" exitCode=0 Jan 30 16:41:45 crc kubenswrapper[4766]: I0130 
16:41:45.728292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerDied","Data":"e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.233699 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.243720 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.285615 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") pod \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") pod \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286392 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") pod \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\" (UID: \"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.286611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") pod \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\" (UID: \"12ab95d5-fb83-42b1-a38b-9e3bb8916f37\") " Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.287054 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" (UID: "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.287574 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.288073 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12ab95d5-fb83-42b1-a38b-9e3bb8916f37" (UID: "12ab95d5-fb83-42b1-a38b-9e3bb8916f37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.296464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs" (OuterVolumeSpecName: "kube-api-access-rzqqs") pod "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" (UID: "0dbf5802-dfa7-4b32-aaa5-48fc779da5d6"). InnerVolumeSpecName "kube-api-access-rzqqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.299765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt" (OuterVolumeSpecName: "kube-api-access-52pnt") pod "12ab95d5-fb83-42b1-a38b-9e3bb8916f37" (UID: "12ab95d5-fb83-42b1-a38b-9e3bb8916f37"). InnerVolumeSpecName "kube-api-access-52pnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.388989 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqqs\" (UniqueName: \"kubernetes.io/projected/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6-kube-api-access-rzqqs\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.389317 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.389409 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52pnt\" (UniqueName: \"kubernetes.io/projected/12ab95d5-fb83-42b1-a38b-9e3bb8916f37-kube-api-access-52pnt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qdgxb" event={"ID":"0dbf5802-dfa7-4b32-aaa5-48fc779da5d6","Type":"ContainerDied","Data":"41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737191 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f06b2c8257561a12ef65c8eeb76663e26a06286d15e19640ea4f589207a7a8" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.737221 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qdgxb" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739749 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwrgq" event={"ID":"12ab95d5-fb83-42b1-a38b-9e3bb8916f37","Type":"ContainerDied","Data":"07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740"} Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739791 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07460daf7562cfe849a1b0747825aad95ff813f31aa3daee3420d79a511b7740" Jan 30 16:41:46 crc kubenswrapper[4766]: I0130 16:41:46.739940 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwrgq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.116860 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.216584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") pod \"75830eb2-571a-4fef-92b5-057b0928cfe0\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.216748 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") pod \"75830eb2-571a-4fef-92b5-057b0928cfe0\" (UID: \"75830eb2-571a-4fef-92b5-057b0928cfe0\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.217510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75830eb2-571a-4fef-92b5-057b0928cfe0" (UID: "75830eb2-571a-4fef-92b5-057b0928cfe0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.219981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4" (OuterVolumeSpecName: "kube-api-access-snrr4") pod "75830eb2-571a-4fef-92b5-057b0928cfe0" (UID: "75830eb2-571a-4fef-92b5-057b0928cfe0"). InnerVolumeSpecName "kube-api-access-snrr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.288971 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.294403 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.301635 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.321012 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75830eb2-571a-4fef-92b5-057b0928cfe0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.321062 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrr4\" (UniqueName: \"kubernetes.io/projected/75830eb2-571a-4fef-92b5-057b0928cfe0-kube-api-access-snrr4\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422749 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") pod \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") pod \"acb52775-c639-4afc-9f21-f33531a854b3\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") pod \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\" (UID: \"3fb40e54-43ed-4dd6-8c23-138c01cf062d\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.422946 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") pod \"acb52775-c639-4afc-9f21-f33531a854b3\" (UID: \"acb52775-c639-4afc-9f21-f33531a854b3\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") pod \"4c8af029-8432-4152-8e74-5c40d72636d7\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") pod \"4c8af029-8432-4152-8e74-5c40d72636d7\" (UID: \"4c8af029-8432-4152-8e74-5c40d72636d7\") " Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.423916 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fb40e54-43ed-4dd6-8c23-138c01cf062d" (UID: "3fb40e54-43ed-4dd6-8c23-138c01cf062d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.424147 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c8af029-8432-4152-8e74-5c40d72636d7" (UID: "4c8af029-8432-4152-8e74-5c40d72636d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.424159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acb52775-c639-4afc-9f21-f33531a854b3" (UID: "acb52775-c639-4afc-9f21-f33531a854b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.427211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w" (OuterVolumeSpecName: "kube-api-access-h546w") pod "acb52775-c639-4afc-9f21-f33531a854b3" (UID: "acb52775-c639-4afc-9f21-f33531a854b3"). InnerVolumeSpecName "kube-api-access-h546w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.427955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v" (OuterVolumeSpecName: "kube-api-access-2t76v") pod "3fb40e54-43ed-4dd6-8c23-138c01cf062d" (UID: "3fb40e54-43ed-4dd6-8c23-138c01cf062d"). InnerVolumeSpecName "kube-api-access-2t76v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.432442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt" (OuterVolumeSpecName: "kube-api-access-2crjt") pod "4c8af029-8432-4152-8e74-5c40d72636d7" (UID: "4c8af029-8432-4152-8e74-5c40d72636d7"). InnerVolumeSpecName "kube-api-access-2crjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t76v\" (UniqueName: \"kubernetes.io/projected/3fb40e54-43ed-4dd6-8c23-138c01cf062d-kube-api-access-2t76v\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525877 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb52775-c639-4afc-9f21-f33531a854b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.525949 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2crjt\" (UniqueName: \"kubernetes.io/projected/4c8af029-8432-4152-8e74-5c40d72636d7-kube-api-access-2crjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526004 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c8af029-8432-4152-8e74-5c40d72636d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526058 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fb40e54-43ed-4dd6-8c23-138c01cf062d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.526113 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h546w\" (UniqueName: \"kubernetes.io/projected/acb52775-c639-4afc-9f21-f33531a854b3-kube-api-access-h546w\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.748998 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2h7p2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.748981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2h7p2" event={"ID":"acb52775-c639-4afc-9f21-f33531a854b3","Type":"ContainerDied","Data":"3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.749505 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edcccb4e6bd12f5d5a1f632835e7d89f180139beb543e68cd250b88bec9ea11" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750779 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-63c5-account-create-update-sx7bq" event={"ID":"4c8af029-8432-4152-8e74-5c40d72636d7","Type":"ContainerDied","Data":"e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750853 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6544f73b15af94bb629621458b494b84335847e60182c1dd01da97465e4bba6" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.750804 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-63c5-account-create-update-sx7bq" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-jhjn2" event={"ID":"75830eb2-571a-4fef-92b5-057b0928cfe0","Type":"ContainerDied","Data":"7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753242 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe75bd4773b57c5c426984b6630208dad0241c55bc83cca2c368bb40dd1f727" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.753289 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-jhjn2" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754784 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e3be-account-create-update-n7qg6" event={"ID":"3fb40e54-43ed-4dd6-8c23-138c01cf062d","Type":"ContainerDied","Data":"eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f"} Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754817 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb72ccaae70630331c7e646c1870cd1adfd31441bf6b569c32cec7aa4da058f" Jan 30 16:41:47 crc kubenswrapper[4766]: I0130 16:41:47.754874 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-n7qg6" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.701910 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:41:48 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:41:48 crc kubenswrapper[4766]: > Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.746535 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.754303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"swift-storage-0\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.764386 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.824159 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.837720 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.885572 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.885990 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886334 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886364 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886372 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886391 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886418 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886425 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886436 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886443 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: E0130 16:41:48.886454 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886460 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886624 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886636 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: 
I0130 16:41:48.886649 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886691 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb52775-c639-4afc-9f21-f33531a854b3" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886702 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" containerName="mariadb-database-create" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.886712 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" containerName="mariadb-account-create-update" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.887388 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.891467 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6xjc8" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.891674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.919695 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.952776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.952999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.953039 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:48 crc kubenswrapper[4766]: I0130 16:41:48.953276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056477 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.056605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.062969 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.066503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.066498 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.081226 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"glance-db-sync-jpmx7\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.092239 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.093726 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.095984 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.114369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161317 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161357 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.161641 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.229710 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267225 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267555 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.267586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " 
pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.268166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.270515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.287171 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"ovn-controller-clmnh-config-w69zf\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.476654 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.535209 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.775614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"e2895452d8c205fa0d4dc996a2287e6197931bc707b2d07e3c6da2c761ed67e2"} Jan 30 16:41:49 crc kubenswrapper[4766]: I0130 16:41:49.811286 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.016645 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.128316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.151168 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wht5r"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.212236 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.213363 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.215542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.226591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.295066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.295124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.396356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.396714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.397829 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.428652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"root-account-create-update-jppr8\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.535741 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.785698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerStarted","Data":"156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59"} Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788296 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerID="5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392" exitCode=0 Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788347 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerDied","Data":"5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392"} Jan 30 16:41:50 crc kubenswrapper[4766]: I0130 16:41:50.788373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerStarted","Data":"00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.171611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:41:51 crc kubenswrapper[4766]: W0130 16:41:51.186421 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9dd82ac_e512_442e_97c4_53be730affca.slice/crio-b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c WatchSource:0}: Error finding container b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c: Status 404 returned error can't find the container with id b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.806305 4766 generic.go:334] "Generic (PLEG): container finished" podID="e9dd82ac-e512-442e-97c4-53be730affca" containerID="10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60" exitCode=0 Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.807128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerDied","Data":"10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.807154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerStarted","Data":"b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8"} Jan 30 16:41:51 crc kubenswrapper[4766]: I0130 16:41:51.817811 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.053487 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fa2128-fb98-4cca-9067-a864a6207188" path="/var/lib/kubelet/pods/93fa2128-fb98-4cca-9067-a864a6207188/volumes" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.204334 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342420 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342646 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342806 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") pod \"cbb373eb-bd59-4480-80b6-bd1b2427105b\" (UID: \"cbb373eb-bd59-4480-80b6-bd1b2427105b\") " Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342767 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run" (OuterVolumeSpecName: "var-run") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342848 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). 
InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.342880 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343218 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343243 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343258 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cbb373eb-bd59-4480-80b6-bd1b2427105b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.343768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.344146 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts" (OuterVolumeSpecName: "scripts") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.364465 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2" (OuterVolumeSpecName: "kube-api-access-s7bk2") pod "cbb373eb-bd59-4480-80b6-bd1b2427105b" (UID: "cbb373eb-bd59-4480-80b6-bd1b2427105b"). InnerVolumeSpecName "kube-api-access-s7bk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445246 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7bk2\" (UniqueName: \"kubernetes.io/projected/cbb373eb-bd59-4480-80b6-bd1b2427105b-kube-api-access-s7bk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445288 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.445303 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbb373eb-bd59-4480-80b6-bd1b2427105b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.829589 4766 generic.go:334] "Generic (PLEG): container finished" podID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerID="420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5" exitCode=0 Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.829673 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.835966 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837920 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-w69zf" event={"ID":"cbb373eb-bd59-4480-80b6-bd1b2427105b","Type":"ContainerDied","Data":"00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f"} Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837983 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c49953103bedb879a4e1810914f639a631e3e34626d1a29d97454bb88f0c1f" Jan 30 16:41:52 crc kubenswrapper[4766]: I0130 16:41:52.837997 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-w69zf" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.306264 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.316158 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh-config-w69zf"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.432495 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:53 crc kubenswrapper[4766]: E0130 16:41:53.432880 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.432901 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.436798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" containerName="ovn-config" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.437546 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.439164 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.447017 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.450128 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.564820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") pod \"e9dd82ac-e512-442e-97c4-53be730affca\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.564963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") pod \"e9dd82ac-e512-442e-97c4-53be730affca\" (UID: \"e9dd82ac-e512-442e-97c4-53be730affca\") " Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565367 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.565702 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "e9dd82ac-e512-442e-97c4-53be730affca" (UID: "e9dd82ac-e512-442e-97c4-53be730affca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.570902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj" (OuterVolumeSpecName: "kube-api-access-52cwj") pod "e9dd82ac-e512-442e-97c4-53be730affca" (UID: "e9dd82ac-e512-442e-97c4-53be730affca"). InnerVolumeSpecName "kube-api-access-52cwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667157 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667264 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667287 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667340 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667383 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667441 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52cwj\" (UniqueName: \"kubernetes.io/projected/e9dd82ac-e512-442e-97c4-53be730affca-kube-api-access-52cwj\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667453 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd82ac-e512-442e-97c4-53be730affca-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667567 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667570 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.667619 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.668474 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.669695 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.698069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"ovn-controller-clmnh-config-zx269\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.711075 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-clmnh" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.806248 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jppr8" event={"ID":"e9dd82ac-e512-442e-97c4-53be730affca","Type":"ContainerDied","Data":"b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850915 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5aead66f7c29087a054f0580ba8f6d13d6016e59ac8cc33f7178b88f8ae405c" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.850877 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jppr8" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.861278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerStarted","Data":"40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.861572 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.867437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2"} Jan 30 16:41:53 crc kubenswrapper[4766]: I0130 16:41:53.896871 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371960.957924 podStartE2EDuration="1m15.896852127s" podCreationTimestamp="2026-01-30 16:40:38 +0000 UTC" firstStartedPulling="2026-01-30 16:40:40.695273353 +0000 UTC m=+1095.333230699" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:53.892633835 +0000 UTC m=+1168.530591211" watchObservedRunningTime="2026-01-30 16:41:53.896852127 +0000 UTC m=+1168.534809473" Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.054717 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb373eb-bd59-4480-80b6-bd1b2427105b" path="/var/lib/kubelet/pods/cbb373eb-bd59-4480-80b6-bd1b2427105b/volumes" Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.383574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:54 crc kubenswrapper[4766]: W0130 16:41:54.410905 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19522cbf_c17c_411f_9732_986bd8ea5c1f.slice/crio-c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c WatchSource:0}: Error finding container c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c: Status 404 returned error can't find the container with id c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.885938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.886543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.886578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.887933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" 
event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerStarted","Data":"ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.887976 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerStarted","Data":"c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c"} Jan 30 16:41:54 crc kubenswrapper[4766]: I0130 16:41:54.919356 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-clmnh-config-zx269" podStartSLOduration=1.919335322 podStartE2EDuration="1.919335322s" podCreationTimestamp="2026-01-30 16:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:41:54.910878555 +0000 UTC m=+1169.548835901" watchObservedRunningTime="2026-01-30 16:41:54.919335322 +0000 UTC m=+1169.557292668" Jan 30 16:41:55 crc kubenswrapper[4766]: I0130 16:41:55.900309 4766 generic.go:334] "Generic (PLEG): container finished" podID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerID="ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a" exitCode=0 Jan 30 16:41:55 crc kubenswrapper[4766]: I0130 16:41:55.900365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerDied","Data":"ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320"} Jan 30 16:41:56 crc kubenswrapper[4766]: I0130 16:41:56.951664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.459819 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545734 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545925 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.545992 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546098 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.546410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") pod \"19522cbf-c17c-411f-9732-986bd8ea5c1f\" (UID: \"19522cbf-c17c-411f-9732-986bd8ea5c1f\") " Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.547686 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.547715 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run-ovn\") on node \"crc\" 
DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.549565 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts" (OuterVolumeSpecName: "scripts") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.549640 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run" (OuterVolumeSpecName: "var-run") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.550887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.558937 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv" (OuterVolumeSpecName: "kube-api-access-h92pv") pod "19522cbf-c17c-411f-9732-986bd8ea5c1f" (UID: "19522cbf-c17c-411f-9732-986bd8ea5c1f"). InnerVolumeSpecName "kube-api-access-h92pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649663 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92pv\" (UniqueName: \"kubernetes.io/projected/19522cbf-c17c-411f-9732-986bd8ea5c1f-kube-api-access-h92pv\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649708 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649720 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19522cbf-c17c-411f-9732-986bd8ea5c1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.649734 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19522cbf-c17c-411f-9732-986bd8ea5c1f-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962054 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh-config-zx269" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962099 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh-config-zx269" event={"ID":"19522cbf-c17c-411f-9732-986bd8ea5c1f","Type":"ContainerDied","Data":"c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.962141 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7412ef7490490406cd984094351044483b068f871fed9ebfbff7f36f589ba3c" Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.976968 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977012 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5"} Jan 30 16:41:57 crc kubenswrapper[4766]: I0130 16:41:57.977033 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerStarted","Data":"1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94"} Jan 30 16:41:58 crc kubenswrapper[4766]: I0130 16:41:58.546590 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:58 crc kubenswrapper[4766]: I0130 16:41:58.555943 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh-config-zx269"] Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.027166 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.531740307 podStartE2EDuration="44.027144359s" podCreationTimestamp="2026-01-30 16:41:15 +0000 UTC" firstStartedPulling="2026-01-30 16:41:49.549201852 +0000 UTC m=+1164.187159198" lastFinishedPulling="2026-01-30 16:41:56.044605904 +0000 UTC m=+1170.682563250" observedRunningTime="2026-01-30 16:41:59.020597384 +0000 UTC m=+1173.658554740" watchObservedRunningTime="2026-01-30 16:41:59.027144359 +0000 UTC m=+1173.665101705" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.335873 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:41:59 crc kubenswrapper[4766]: E0130 16:41:59.336706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.336801 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: E0130 16:41:59.336879 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 
crc kubenswrapper[4766]: I0130 16:41:59.336940 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.337155 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9dd82ac-e512-442e-97c4-53be730affca" containerName="mariadb-account-create-update" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.337274 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" containerName="ovn-config" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.338257 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.341347 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.364003 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.482376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.482779 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483245 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483303 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.483341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584410 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.584599 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585738 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585930 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.585984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.609636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"dnsmasq-dns-77585f5f8c-jfh6l\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") " pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.666061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:41:59 crc kubenswrapper[4766]: I0130 16:41:59.770506 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:42:00 crc kubenswrapper[4766]: I0130 16:42:00.055199 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19522cbf-c17c-411f-9732-986bd8ea5c1f" path="/var/lib/kubelet/pods/19522cbf-c17c-411f-9732-986bd8ea5c1f/volumes" Jan 30 16:42:07 crc kubenswrapper[4766]: I0130 16:42:07.726907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.084979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerStarted","Data":"608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088845 4766 generic.go:334] "Generic (PLEG): container finished" podID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerID="3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7" exitCode=0 Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.088905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerStarted","Data":"12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"} Jan 30 16:42:08 crc kubenswrapper[4766]: I0130 16:42:08.111595 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jpmx7" podStartSLOduration=2.544435285 podStartE2EDuration="20.111574868s" podCreationTimestamp="2026-01-30 16:41:48 +0000 UTC" firstStartedPulling="2026-01-30 16:41:49.826955131 +0000 UTC m=+1164.464912477" lastFinishedPulling="2026-01-30 16:42:07.394094714 +0000 UTC m=+1182.032052060" observedRunningTime="2026-01-30 
16:42:08.102070363 +0000 UTC m=+1182.740027709" watchObservedRunningTime="2026-01-30 16:42:08.111574868 +0000 UTC m=+1182.749532214" Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.103802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerStarted","Data":"16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba"} Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.104443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:42:09 crc kubenswrapper[4766]: I0130 16:42:09.140778 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podStartSLOduration=10.140745723 podStartE2EDuration="10.140745723s" podCreationTimestamp="2026-01-30 16:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:09.130147028 +0000 UTC m=+1183.768104394" watchObservedRunningTime="2026-01-30 16:42:09.140745723 +0000 UTC m=+1183.778703069" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.152682 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.593608 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.595081 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.607683 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.692413 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.694262 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.696520 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.705531 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711905 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.711945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.813696 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816370 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.816601 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.817994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.818091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.818406 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.839065 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.864963 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"cinder-db-create-x95v6\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.878842 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"cinder-270a-account-create-update-d5mdk\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.912655 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.914863 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921085 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921406 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921831 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.921955 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.922565 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.931302 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.933486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.951560 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:10 crc kubenswrapper[4766]: I0130 16:42:10.966876 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.011806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023349 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.023592 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.026428 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.027642 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.033353 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.040907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125779 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.125819 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126025 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126099 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126136 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.126192 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.134015 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.135554 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.136229 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.144396 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.180157 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.227846 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228430 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228553 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.228646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.229065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.258647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.259807 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.260323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"keystone-db-sync-8p4hm\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.261901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"barbican-db-create-zf522\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " pod="openstack/barbican-db-create-zf522" Jan 30 16:42:11 crc kubenswrapper[4766]: I0130 16:42:11.265802 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"neutron-db-create-dksnn\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.330977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331071 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331127 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.331302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29vj\" (UniqueName: 
\"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.332069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.332303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.359931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"barbican-66a8-account-create-update-wk4g8\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.363679 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"neutron-321b-account-create-update-fb9ws\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.402223 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.422885 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.439450 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.447483 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:11.631991 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.618737 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.626840 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:42:12 crc kubenswrapper[4766]: W0130 16:42:12.634661 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb242f466_9049_49a9_b655_b270790de9ce.slice/crio-c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c WatchSource:0}: Error finding container c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c: Status 404 returned error can't find the container with id c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.853503 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.891924 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:42:12 crc kubenswrapper[4766]: W0130 16:42:12.902763 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod199b8ae3_05c7_4785_9590_1cb06cce0013.slice/crio-70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a WatchSource:0}: Error finding container 70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a: Status 404 returned error can't find the container with id 70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.945218 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:42:12 crc kubenswrapper[4766]: I0130 16:42:12.990588 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.017220 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.182723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerStarted","Data":"02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.185778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerStarted","Data":"14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.187699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerStarted","Data":"c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.189345 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" 
event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerStarted","Data":"9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.191564 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerStarted","Data":"75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.193327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerStarted","Data":"70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.197149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerStarted","Data":"b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.197302 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerStarted","Data":"a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820"} Jan 30 16:42:13 crc kubenswrapper[4766]: I0130 16:42:13.228563 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-x95v6" podStartSLOduration=3.228535213 podStartE2EDuration="3.228535213s" podCreationTimestamp="2026-01-30 16:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:13.217300892 +0000 UTC m=+1187.855258248" watchObservedRunningTime="2026-01-30 16:42:13.228535213 +0000 UTC m=+1187.866492559" Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.209082 4766 generic.go:334] "Generic (PLEG): container finished" podID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerID="b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.209475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerDied","Data":"b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.213313 4766 generic.go:334] "Generic (PLEG): container finished" podID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerID="384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.213370 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerDied","Data":"384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.217502 4766 generic.go:334] "Generic (PLEG): container finished" podID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerID="89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.217543 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" 
event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerDied","Data":"89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.219513 4766 generic.go:334] "Generic (PLEG): container finished" podID="db058df5-07b8-4d6e-a646-48ac7105c516" containerID="3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.219562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerDied","Data":"3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.221091 4766 generic.go:334] "Generic (PLEG): container finished" podID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerID="46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.221139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerDied","Data":"46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.230397 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerDied","Data":"8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4"} Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.231530 4766 generic.go:334] "Generic (PLEG): container finished" podID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerID="8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4" exitCode=0 Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.668360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.734363 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:14 crc kubenswrapper[4766]: I0130 16:42:14.734645 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rghwg" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" containerID="cri-o://d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" gracePeriod=10 Jan 30 16:42:15 crc kubenswrapper[4766]: I0130 16:42:15.257365 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerID="d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" exitCode=0 Jan 30 16:42:15 crc kubenswrapper[4766]: I0130 16:42:15.257593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7"} Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.731423 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.740251 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.751136 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.758576 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.772658 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.784559 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.794978 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876778 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") pod \"199b8ae3-05c7-4785-9590-1cb06cce0013\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876843 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") pod \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") pod \"3747d6ac-f476-429b-83b8-c5a65a241d47\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") pod \"199b8ae3-05c7-4785-9590-1cb06cce0013\" (UID: \"199b8ae3-05c7-4785-9590-1cb06cce0013\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.876973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") pod \"db058df5-07b8-4d6e-a646-48ac7105c516\" (UID: \"db058df5-07b8-4d6e-a646-48ac7105c516\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877030 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") pod \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") pod \"db058df5-07b8-4d6e-a646-48ac7105c516\" (UID: 
\"db058df5-07b8-4d6e-a646-48ac7105c516\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") pod \"3747d6ac-f476-429b-83b8-c5a65a241d47\" (UID: \"3747d6ac-f476-429b-83b8-c5a65a241d47\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877165 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") pod \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\" (UID: \"81d680b3-ced9-4a2a-9a50-780e6239b4a5\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "199b8ae3-05c7-4785-9590-1cb06cce0013" (UID: "199b8ae3-05c7-4785-9590-1cb06cce0013"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81d680b3-ced9-4a2a-9a50-780e6239b4a5" (UID: "81d680b3-ced9-4a2a-9a50-780e6239b4a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877804 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3747d6ac-f476-429b-83b8-c5a65a241d47" (UID: "3747d6ac-f476-429b-83b8-c5a65a241d47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") pod \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\" (UID: \"10bcd3d7-2c30-4a51-9455-2ffed88a7f43\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.877953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db058df5-07b8-4d6e-a646-48ac7105c516" (UID: "db058df5-07b8-4d6e-a646-48ac7105c516"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878505 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/199b8ae3-05c7-4785-9590-1cb06cce0013-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878528 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db058df5-07b8-4d6e-a646-48ac7105c516-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878536 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d680b3-ced9-4a2a-9a50-780e6239b4a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878545 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3747d6ac-f476-429b-83b8-c5a65a241d47-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.878601 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10bcd3d7-2c30-4a51-9455-2ffed88a7f43" (UID: "10bcd3d7-2c30-4a51-9455-2ffed88a7f43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.885496 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4" (OuterVolumeSpecName: "kube-api-access-wd6l4") pod "199b8ae3-05c7-4785-9590-1cb06cce0013" (UID: "199b8ae3-05c7-4785-9590-1cb06cce0013"). InnerVolumeSpecName "kube-api-access-wd6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.885548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t" (OuterVolumeSpecName: "kube-api-access-h9h7t") pod "3747d6ac-f476-429b-83b8-c5a65a241d47" (UID: "3747d6ac-f476-429b-83b8-c5a65a241d47"). InnerVolumeSpecName "kube-api-access-h9h7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.886551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj" (OuterVolumeSpecName: "kube-api-access-c29vj") pod "10bcd3d7-2c30-4a51-9455-2ffed88a7f43" (UID: "10bcd3d7-2c30-4a51-9455-2ffed88a7f43"). InnerVolumeSpecName "kube-api-access-c29vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.900966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs" (OuterVolumeSpecName: "kube-api-access-lbsrs") pod "db058df5-07b8-4d6e-a646-48ac7105c516" (UID: "db058df5-07b8-4d6e-a646-48ac7105c516"). InnerVolumeSpecName "kube-api-access-lbsrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.906960 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4" (OuterVolumeSpecName: "kube-api-access-4scx4") pod "81d680b3-ced9-4a2a-9a50-780e6239b4a5" (UID: "81d680b3-ced9-4a2a-9a50-780e6239b4a5"). InnerVolumeSpecName "kube-api-access-4scx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979869 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.979991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") pod \"1caad6ca-26a4-488c-8b03-90da40a955b0\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") pod \"1caad6ca-26a4-488c-8b03-90da40a955b0\" (UID: \"1caad6ca-26a4-488c-8b03-90da40a955b0\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980243 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") pod \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\" (UID: \"c4db25e7-718f-4a48-8dd2-2db2ae9e804c\") " Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980654 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbsrs\" (UniqueName: \"kubernetes.io/projected/db058df5-07b8-4d6e-a646-48ac7105c516-kube-api-access-lbsrs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980679 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4scx4\" (UniqueName: \"kubernetes.io/projected/81d680b3-ced9-4a2a-9a50-780e6239b4a5-kube-api-access-4scx4\") on node \"crc\" 
DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980689 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980700 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd6l4\" (UniqueName: \"kubernetes.io/projected/199b8ae3-05c7-4785-9590-1cb06cce0013-kube-api-access-wd6l4\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980710 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c29vj\" (UniqueName: \"kubernetes.io/projected/10bcd3d7-2c30-4a51-9455-2ffed88a7f43-kube-api-access-c29vj\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.980720 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9h7t\" (UniqueName: \"kubernetes.io/projected/3747d6ac-f476-429b-83b8-c5a65a241d47-kube-api-access-h9h7t\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.981802 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1caad6ca-26a4-488c-8b03-90da40a955b0" (UID: "1caad6ca-26a4-488c-8b03-90da40a955b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.984423 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs" (OuterVolumeSpecName: "kube-api-access-xbrfs") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "kube-api-access-xbrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:20 crc kubenswrapper[4766]: I0130 16:42:20.989547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7" (OuterVolumeSpecName: "kube-api-access-bqgb7") pod "1caad6ca-26a4-488c-8b03-90da40a955b0" (UID: "1caad6ca-26a4-488c-8b03-90da40a955b0"). InnerVolumeSpecName "kube-api-access-bqgb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.026407 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.027575 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.028825 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.038027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config" (OuterVolumeSpecName: "config") pod "c4db25e7-718f-4a48-8dd2-2db2ae9e804c" (UID: "c4db25e7-718f-4a48-8dd2-2db2ae9e804c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083669 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083703 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083715 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083725 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbrfs\" (UniqueName: \"kubernetes.io/projected/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-kube-api-access-xbrfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083736 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgb7\" (UniqueName: \"kubernetes.io/projected/1caad6ca-26a4-488c-8b03-90da40a955b0-kube-api-access-bqgb7\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083748 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1caad6ca-26a4-488c-8b03-90da40a955b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.083758 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4db25e7-718f-4a48-8dd2-2db2ae9e804c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342266 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x95v6" event={"ID":"1caad6ca-26a4-488c-8b03-90da40a955b0","Type":"ContainerDied","Data":"a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342554 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ab14890ae2c97c12d78d8e362cb2c1ad5f7d35b5f004e94864617693ecf820" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.342696 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-x95v6" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.357674 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rghwg" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.358318 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rghwg" event={"ID":"c4db25e7-718f-4a48-8dd2-2db2ae9e804c","Type":"ContainerDied","Data":"9182f1033ef23024434f7951cc54bc1f7a26c4fcea86a6ac3668ac33be32ed89"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.358432 4766 scope.go:117] "RemoveContainer" containerID="d4d926b25f16af7c860cb7d5c7c75d1eb0c85c7438a98e36515485f9623090f7" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362036 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zf522" event={"ID":"81d680b3-ced9-4a2a-9a50-780e6239b4a5","Type":"ContainerDied","Data":"02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362126 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b5651ff390f182500384a7546a30e84e2a5edec6f1b0b62a8505aa9b31da57" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.362258 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zf522" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375443 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-321b-account-create-update-fb9ws" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375454 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-321b-account-create-update-fb9ws" event={"ID":"10bcd3d7-2c30-4a51-9455-2ffed88a7f43","Type":"ContainerDied","Data":"14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.375485 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d244d5b685b5ff7067f3a2cfc86300c87e8c2c380c2d83c5247b70aa7d686c" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.378980 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerStarted","Data":"88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-270a-account-create-update-d5mdk" event={"ID":"db058df5-07b8-4d6e-a646-48ac7105c516","Type":"ContainerDied","Data":"9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388202 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9656d34761b96b7aec15427a2a76d3ef9b7ff049df5dafee525596963bfa4aec" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.388286 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-270a-account-create-update-d5mdk" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-wk4g8" event={"ID":"3747d6ac-f476-429b-83b8-c5a65a241d47","Type":"ContainerDied","Data":"75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392605 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f824fa71f59e0128ce66d11b0cd6c6363a46c019ebc5a4072951734cae7447" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.392675 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-wk4g8" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.398685 4766 scope.go:117] "RemoveContainer" containerID="e50ccbe59f4a2cbb46a08d936a0c8b4ab930afea52bcfbf233b4a8e6a0125171" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.399302 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dksnn" event={"ID":"199b8ae3-05c7-4785-9590-1cb06cce0013","Type":"ContainerDied","Data":"70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a"} Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407917 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d102c081dbd39e2e62993ac6ada201d37f0ba346e8d6f7b4db3fd0a7480f1a" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.407936 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dksnn" Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.409328 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rghwg"] Jan 30 16:42:21 crc kubenswrapper[4766]: I0130 16:42:21.433117 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8p4hm" podStartSLOduration=3.029395202 podStartE2EDuration="11.433089922s" podCreationTimestamp="2026-01-30 16:42:10 +0000 UTC" firstStartedPulling="2026-01-30 16:42:12.641758024 +0000 UTC m=+1187.279715370" lastFinishedPulling="2026-01-30 16:42:21.045452744 +0000 UTC m=+1195.683410090" observedRunningTime="2026-01-30 16:42:21.416884247 +0000 UTC m=+1196.054841603" watchObservedRunningTime="2026-01-30 16:42:21.433089922 +0000 UTC m=+1196.071047268" Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.050616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" path="/var/lib/kubelet/pods/c4db25e7-718f-4a48-8dd2-2db2ae9e804c/volumes" Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.418428 4766 generic.go:334] "Generic (PLEG): container finished" podID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerID="608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161" exitCode=0 Jan 30 16:42:22 crc kubenswrapper[4766]: I0130 16:42:22.418517 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerDied","Data":"608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161"} Jan 30 16:42:23 crc kubenswrapper[4766]: I0130 16:42:23.978615 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.141689 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.141888 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.142185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.142209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") pod \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\" (UID: \"42d1f0ba-d11c-4e08-9e01-5783f42a6b84\") " Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.150497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs" (OuterVolumeSpecName: "kube-api-access-dfprs") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "kube-api-access-dfprs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.152915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.169672 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.193352 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data" (OuterVolumeSpecName: "config-data") pod "42d1f0ba-d11c-4e08-9e01-5783f42a6b84" (UID: "42d1f0ba-d11c-4e08-9e01-5783f42a6b84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.243850 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.243887 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.244086 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfprs\" (UniqueName: \"kubernetes.io/projected/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-kube-api-access-dfprs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.244102 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d1f0ba-d11c-4e08-9e01-5783f42a6b84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.388492 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-rghwg" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435594 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpmx7" event={"ID":"42d1f0ba-d11c-4e08-9e01-5783f42a6b84","Type":"ContainerDied","Data":"156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59"} Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435638 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156259d42ec5bb7cdf5b66d3e56d10fcf3255030f0fe6e860e8d86caf0aded59" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.435667 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpmx7" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.852469 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853276 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853301 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853318 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="init" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853326 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="init" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853346 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853357 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853372 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853379 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853391 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853398 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853418 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853427 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853448 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853456 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853472 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: E0130 16:42:24.853492 4766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853501 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853717 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853739 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853750 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4db25e7-718f-4a48-8dd2-2db2ae9e804c" containerName="dnsmasq-dns" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853770 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853782 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" containerName="mariadb-database-create" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853796 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" containerName="mariadb-account-create-update" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.853810 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" containerName="glance-db-sync" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.854886 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.887294 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956850 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.956989 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.957028 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:24 crc kubenswrapper[4766]: I0130 16:42:24.957047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059239 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.059445 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.060808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.061104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.062121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.080209 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lvx\" (UniqueName: 
\"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"dnsmasq-dns-7ff5475cc9-kch9t\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.185164 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:25 crc kubenswrapper[4766]: I0130 16:42:25.443575 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:25 crc kubenswrapper[4766]: W0130 16:42:25.455432 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52befca_b3ab_4e81_bc0f_c828a8bdc49b.slice/crio-ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c WatchSource:0}: Error finding container ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c: Status 404 returned error can't find the container with id ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.453645 4766 generic.go:334] "Generic (PLEG): container finished" podID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" exitCode=0 Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.453757 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0"} Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.454025 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerStarted","Data":"ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c"} Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.457111 4766 generic.go:334] "Generic (PLEG): container finished" podID="b242f466-9049-49a9-b655-b270790de9ce" containerID="88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00" exitCode=0 Jan 30 16:42:26 crc kubenswrapper[4766]: I0130 16:42:26.457168 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerDied","Data":"88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00"} Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.467936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerStarted","Data":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.497444 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" podStartSLOduration=3.497420496 podStartE2EDuration="3.497420496s" podCreationTimestamp="2026-01-30 16:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:27.490703996 +0000 UTC m=+1202.128661352" watchObservedRunningTime="2026-01-30 16:42:27.497420496 +0000 UTC m=+1202.135377862" Jan 30 16:42:27 crc kubenswrapper[4766]: I0130 16:42:27.827535 4766 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015156 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.015373 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") pod \"b242f466-9049-49a9-b655-b270790de9ce\" (UID: \"b242f466-9049-49a9-b655-b270790de9ce\") " Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.021003 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp" (OuterVolumeSpecName: "kube-api-access-gwzpp") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "kube-api-access-gwzpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.037943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.067483 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data" (OuterVolumeSpecName: "config-data") pod "b242f466-9049-49a9-b655-b270790de9ce" (UID: "b242f466-9049-49a9-b655-b270790de9ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117866 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwzpp\" (UniqueName: \"kubernetes.io/projected/b242f466-9049-49a9-b655-b270790de9ce-kube-api-access-gwzpp\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117895 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.117905 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f466-9049-49a9-b655-b270790de9ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.476952 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8p4hm" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8p4hm" event={"ID":"b242f466-9049-49a9-b655-b270790de9ce","Type":"ContainerDied","Data":"c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c"} Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477376 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8288f2c3105e402da6e0f989ee1c689d8725bfc649cd0e05bef6f7830c2ab0c" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.477433 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.822969 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.861731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:28 crc kubenswrapper[4766]: E0130 16:42:28.862108 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862128 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862313 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b242f466-9049-49a9-b655-b270790de9ce" containerName="keystone-db-sync" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.862871 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872230 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872798 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.872956 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.887649 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.888738 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.928772 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.930230 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:28 crc kubenswrapper[4766]: I0130 16:42:28.956321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.058815 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059106 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059312 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059696 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059746 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.059924 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.111807 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.113005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.126705 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.126872 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.127287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d97nd" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161451 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161508 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: 
\"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.161728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.163718 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.164849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.166060 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: 
I0130 16:42:29.177943 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.183861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.188801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.197757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.226780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.228348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.235363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"keystone-bootstrap-hbqvh\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") " pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.270015 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"dnsmasq-dns-5c5cc7c5ff-ll29f\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.271149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.271256 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc 
kubenswrapper[4766]: I0130 16:42:29.271284 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.297276 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.299018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.309519 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.310006 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.314518 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rbvkd" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.343256 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.368684 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.370002 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372480 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372524 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372612 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: 
\"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372658 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.372749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386613 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386838 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fh4lz" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.386996 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.414886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.427537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.468460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"neutron-db-sync-sc6rp\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.473233 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474349 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474384 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474579 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.474729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.478515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.489634 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491554 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491598 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.491745 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.504085 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.504721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.507780 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.515951 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.518197 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.525828 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.536828 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.557258 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.557929 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.583262 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"cinder-db-sync-rxmkt\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") " pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.603911 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.604737 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.612119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.622053 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.653596 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.653610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"placement-db-sync-mq5sq\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.656505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.684665 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.686539 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709518 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709676 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709744 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.709996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.730956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.744341 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.747511 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.751151 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47zjc" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.758569 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.766058 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.775812 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813321 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813825 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.813942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.814031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815555 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815638 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815791 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815885 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.815962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816122 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816260 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.816515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.822425 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.823859 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.857231 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.857502 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.860487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.868801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.870110 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.874452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"ceilometer-0\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") " pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.908239 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925713 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925739 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.925897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.926860 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.927340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.927717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.928069 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.933325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.962714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"dnsmasq-dns-8b5c85b87-jlsp7\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.994089 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:29 crc kubenswrapper[4766]: I0130 16:42:29.995840 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.008629 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009248 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009509 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6xjc8" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009676 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.009878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.029833 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.034555 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.034638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.037784 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.063885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"barbican-db-sync-zgzf5\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.100326 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.108922 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.111688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.116286 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.116644 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.133923 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135701 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135761 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.135975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239665 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239716 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239843 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239939 4766 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239973 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.239990 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240757 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.240838 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.241399 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.241680 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.245365 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.247037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.250045 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.262327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.263361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.295188 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.304464 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22fc62b3_3a89_44ec_8f23_4182b363478c.slice/crio-cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e WatchSource:0}: Error finding container cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e: Status 404 returned error can't find the container with id cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.314110 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-hbqvh"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.317841 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.326113 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.342679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.344087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.344388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.345021 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.349052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.349395 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.352990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.370092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.378486 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.397519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " pod="openstack/glance-default-internal-api-0" Jan 30 
16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.480349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.533055 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" containerID="cri-o://956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" gracePeriod=10 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.533087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerStarted","Data":"cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"} Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.593646 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.605209 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020df37b_56f5_4f59_8c96_faaea5bb7e27.slice/crio-bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8 WatchSource:0}: Error finding container bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8: Status 404 returned error can't find the container with id bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.606350 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.617409 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:42:30 crc kubenswrapper[4766]: W0130 16:42:30.673169 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc27037_152a_461b_bce1_6d37b38bbb95.slice/crio-fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282 WatchSource:0}: Error finding container fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282: Status 404 returned error can't find the container with id fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282 Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.805000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:30 crc kubenswrapper[4766]: I0130 16:42:30.824369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.050118 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.053035 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:42:31 crc kubenswrapper[4766]: W0130 16:42:31.112889 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ccb2d3_4270_48e3_99cc_6031edfa30ae.slice/crio-de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26 WatchSource:0}: Error finding container de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26: Status 404 returned error can't find the container with id 
de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.214439 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.354905 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.479007 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.479730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480259 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480836 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.480953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") pod \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\" (UID: \"b52befca-b3ab-4e81-bc0f-c828a8bdc49b\") " Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.512631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx" (OuterVolumeSpecName: "kube-api-access-22lvx") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "kube-api-access-22lvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.586232 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lvx\" (UniqueName: \"kubernetes.io/projected/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-kube-api-access-22lvx\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592322 4766 generic.go:334] "Generic (PLEG): container finished" podID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" exitCode=0 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" event={"ID":"b52befca-b3ab-4e81-bc0f-c828a8bdc49b","Type":"ContainerDied","Data":"ac637fb7fa3a5a2ceefaf2b57d6fb0986a7fd9542b5a8336144c4521b7ec6f8c"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592419 4766 scope.go:117] "RemoveContainer" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.592547 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kch9t" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.598604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerStarted","Data":"de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.606818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617205 4766 generic.go:334] "Generic (PLEG): container finished" podID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerID="f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991" exitCode=0 Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617275 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerDied","Data":"f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.617301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerStarted","Data":"bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.629282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerStarted","Data":"c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.629330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerStarted","Data":"fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.641851 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.643550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerStarted","Data":"e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.648949 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerStarted","Data":"7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.653548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerStarted","Data":"229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.662997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerStarted","Data":"486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.673406 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"49e6a264688b5efa68e5dd3bb58dc0b650db2a13ee17de4b4093f263fc716ec3"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.690426 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.690471 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: W0130 16:42:31.695298 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89845731_1ffc_4f79_a979_d83068cebc2a.slice/crio-8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d WatchSource:0}: Error finding container 8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d: Status 404 returned error can't find the container with id 8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.695450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"80541219c3010f86d328821046e3eb93ce24469ac922b57c41a30f77d511e82f"} Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.732422 4766 scope.go:117] "RemoveContainer" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.759375 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sc6rp" podStartSLOduration=2.759350858 podStartE2EDuration="2.759350858s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 16:42:31.668126731 +0000 UTC m=+1206.306084077" watchObservedRunningTime="2026-01-30 16:42:31.759350858 +0000 UTC m=+1206.397308204" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.760236 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.767296 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hbqvh" podStartSLOduration=3.7672767609999998 podStartE2EDuration="3.767276761s" podCreationTimestamp="2026-01-30 16:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:31.703048208 +0000 UTC m=+1206.341005554" watchObservedRunningTime="2026-01-30 16:42:31.767276761 +0000 UTC m=+1206.405234107" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.768707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.780493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config" (OuterVolumeSpecName: "config") pod "b52befca-b3ab-4e81-bc0f-c828a8bdc49b" (UID: "b52befca-b3ab-4e81-bc0f-c828a8bdc49b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798457 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798490 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:31 crc kubenswrapper[4766]: I0130 16:42:31.798501 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b52befca-b3ab-4e81-bc0f-c828a8bdc49b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.250257 4766 scope.go:117] "RemoveContainer" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:32 crc kubenswrapper[4766]: E0130 16:42:32.250964 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": container with ID starting with 956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0 not found: ID does not exist" containerID="956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.251014 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0"} err="failed to get container status \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": rpc error: code = NotFound desc = could not find container \"956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0\": container with ID starting with 956170968e42ad47b2cafd397e206fd1268906e156bbd44d9f8ca6e3f5096ee0 not found: ID does not exist" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.251043 4766 scope.go:117] "RemoveContainer" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:32 crc kubenswrapper[4766]: E0130 16:42:32.255999 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": container with ID starting with 4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0 not found: ID does not exist" containerID="4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.256046 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0"} err="failed to get container status \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": rpc error: code = NotFound desc = could not find container \"4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0\": container with ID starting with 4088f4fd9d06fcf827d9109fe4a2bcec3a2991bdc54db993295bb9b7219e61c0 not found: ID does not exist" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.615299 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.712050 
4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kch9t"] Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.758653 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.760329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" event={"ID":"020df37b-56f5-4f59-8c96-faaea5bb7e27","Type":"ContainerDied","Data":"bbefff24f39dbadacd598d64ee407c71a5ea9986cb075543865724ab87f304f8"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.760373 4766 scope.go:117] "RemoveContainer" containerID="f3e8472abbbcf843661882d9d161476828c357dd15048dd6266dd09765622991" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.807705 4766 generic.go:334] "Generic (PLEG): container finished" podID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerID="23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81" exitCode=0 Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.807766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.816413 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.822611 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"} Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855732 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855800 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855894 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855960 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.855994 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") pod \"020df37b-56f5-4f59-8c96-faaea5bb7e27\" (UID: \"020df37b-56f5-4f59-8c96-faaea5bb7e27\") " Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.906189 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z" (OuterVolumeSpecName: "kube-api-access-mgd4z") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "kube-api-access-mgd4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.958513 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgd4z\" (UniqueName: \"kubernetes.io/projected/020df37b-56f5-4f59-8c96-faaea5bb7e27-kube-api-access-mgd4z\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.974069 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.988389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.988572 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:32 crc kubenswrapper[4766]: I0130 16:42:32.992765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config" (OuterVolumeSpecName: "config") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.007442 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "020df37b-56f5-4f59-8c96-faaea5bb7e27" (UID: "020df37b-56f5-4f59-8c96-faaea5bb7e27"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066677 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066961 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066970 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066979 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.066988 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/020df37b-56f5-4f59-8c96-faaea5bb7e27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.619684 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.781333 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.795502 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.863855 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f"} Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.865821 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-ll29f" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.876954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerStarted","Data":"05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754"} Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.878287 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:33 crc kubenswrapper[4766]: I0130 16:42:33.998461 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" podStartSLOduration=4.998439894 podStartE2EDuration="4.998439894s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:33.916511207 +0000 UTC m=+1208.554468543" watchObservedRunningTime="2026-01-30 16:42:33.998439894 +0000 UTC m=+1208.636397240" Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.036093 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.100216 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" path="/var/lib/kubelet/pods/b52befca-b3ab-4e81-bc0f-c828a8bdc49b/volumes" Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.105964 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-ll29f"] Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerStarted","Data":"4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161"} Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891136 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" containerID="cri-o://12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b" gracePeriod=30 Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.891232 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" containerID="cri-o://4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" gracePeriod=30 Jan 30 16:42:34 crc kubenswrapper[4766]: I0130 16:42:34.920917 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.920889955 podStartE2EDuration="6.920889955s" podCreationTimestamp="2026-01-30 16:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:34.911158735 +0000 UTC m=+1209.549116101" watchObservedRunningTime="2026-01-30 16:42:34.920889955 +0000 UTC m=+1209.558847311" Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926081 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerStarted","Data":"6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8"} Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926381 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log" containerID="cri-o://05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f" gracePeriod=30 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.926949 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd" containerID="cri-o://6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8" gracePeriod=30 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.945441 4766 generic.go:334] "Generic (PLEG): container finished" podID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerID="4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" exitCode=143 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.945476 4766 generic.go:334] "Generic (PLEG): container finished" podID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerID="12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b" exitCode=143 Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.946278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161"} Jan 30 16:42:35 crc kubenswrapper[4766]: I0130 16:42:35.946341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.089690 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.089668454 podStartE2EDuration="7.089668454s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:42:35.955522746 +0000 UTC m=+1210.593480092" watchObservedRunningTime="2026-01-30 16:42:36.089668454 +0000 UTC m=+1210.727625800" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.093516 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" path="/var/lib/kubelet/pods/020df37b-56f5-4f59-8c96-faaea5bb7e27/volumes" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.380359 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438009 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438077 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438266 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438297 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438318 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438340 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438375 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.438422 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") pod \"2654a202-1ccf-4de3-90bf-3bc6f15de239\" (UID: \"2654a202-1ccf-4de3-90bf-3bc6f15de239\") " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.439761 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs" (OuterVolumeSpecName: "logs") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.440376 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.447425 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts" (OuterVolumeSpecName: "scripts") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.459849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.468570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj" (OuterVolumeSpecName: "kube-api-access-97hjj") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "kube-api-access-97hjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.482106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.510721 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.517459 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data" (OuterVolumeSpecName: "config-data") pod "2654a202-1ccf-4de3-90bf-3bc6f15de239" (UID: "2654a202-1ccf-4de3-90bf-3bc6f15de239"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.543963 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544012 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hjj\" (UniqueName: \"kubernetes.io/projected/2654a202-1ccf-4de3-90bf-3bc6f15de239-kube-api-access-97hjj\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544057 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544071 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544083 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544127 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2654a202-1ccf-4de3-90bf-3bc6f15de239-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544140 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.544152 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2654a202-1ccf-4de3-90bf-3bc6f15de239-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.560240 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.645652 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957697 4766 generic.go:334] "Generic (PLEG): container finished" podID="89845731-1ffc-4f79-a979-d83068cebc2a" containerID="6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8" exitCode=0 Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957731 4766 generic.go:334] "Generic (PLEG): container finished" podID="89845731-1ffc-4f79-a979-d83068cebc2a" containerID="05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f" exitCode=143 Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.957811 4766 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2654a202-1ccf-4de3-90bf-3bc6f15de239","Type":"ContainerDied","Data":"49e6a264688b5efa68e5dd3bb58dc0b650db2a13ee17de4b4093f263fc716ec3"} Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961228 4766 scope.go:117] "RemoveContainer" containerID="4fbb211752ea890c4ddb2cfff8ec0c8175e951ec7d5658df94ce295047ab2161" Jan 30 16:42:36 crc kubenswrapper[4766]: I0130 16:42:36.961257 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.010710 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.024277 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036395 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036884 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036919 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036940 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036949 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036971 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.036980 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.036999 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" Jan 30 16:42:37 crc kubenswrapper[4766]: E0130 16:42:37.037019 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037026 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037247 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-log" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 
16:42:37.037277 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" containerName="glance-httpd" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037291 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52befca-b3ab-4e81-bc0f-c828a8bdc49b" containerName="dnsmasq-dns" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.037313 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="020df37b-56f5-4f59-8c96-faaea5bb7e27" containerName="init" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.038335 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.047050 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.053397 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.061377 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161861 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.161964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162114 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.162938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264091 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264246 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264278 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.264547 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.265519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.265617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.274857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.275116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.276283 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.279001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.286080 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 
16:42:37.295556 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " pod="openstack/glance-default-external-api-0" Jan 30 16:42:37 crc kubenswrapper[4766]: I0130 16:42:37.360284 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:42:38 crc kubenswrapper[4766]: I0130 16:42:38.053702 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2654a202-1ccf-4de3-90bf-3bc6f15de239" path="/var/lib/kubelet/pods/2654a202-1ccf-4de3-90bf-3bc6f15de239/volumes" Jan 30 16:42:39 crc kubenswrapper[4766]: I0130 16:42:39.990361 4766 generic.go:334] "Generic (PLEG): container finished" podID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerID="486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57" exitCode=0 Jan 30 16:42:39 crc kubenswrapper[4766]: I0130 16:42:39.990534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerDied","Data":"486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57"} Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.011465 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.085218 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"] Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.085860 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" containerID="cri-o://16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba" gracePeriod=10 Jan 30 16:42:40 crc kubenswrapper[4766]: I0130 16:42:40.912555 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.023668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89845731-1ffc-4f79-a979-d83068cebc2a","Type":"ContainerDied","Data":"8398be31fd1c1dbaac0a47e8ca9fd7d89f84dea6a8b9da4892e60534d152611d"} Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.023795 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.030311 4766 generic.go:334] "Generic (PLEG): container finished" podID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerID="16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba" exitCode=0 Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.030569 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba"} Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061421 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061628 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061665 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061803 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.061918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"89845731-1ffc-4f79-a979-d83068cebc2a\" (UID: \"89845731-1ffc-4f79-a979-d83068cebc2a\") " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs" (OuterVolumeSpecName: "logs") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062509 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.062734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.074335 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.092575 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh" (OuterVolumeSpecName: "kube-api-access-lmsmh") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "kube-api-access-lmsmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.092888 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts" (OuterVolumeSpecName: "scripts") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.119263 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.155431 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166564 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166626 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166637 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89845731-1ffc-4f79-a979-d83068cebc2a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166647 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.166655 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.168200 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmsmh\" (UniqueName: \"kubernetes.io/projected/89845731-1ffc-4f79-a979-d83068cebc2a-kube-api-access-lmsmh\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.170404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data" (OuterVolumeSpecName: "config-data") pod "89845731-1ffc-4f79-a979-d83068cebc2a" (UID: "89845731-1ffc-4f79-a979-d83068cebc2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.197596 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.269979 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.270022 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89845731-1ffc-4f79-a979-d83068cebc2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.450672 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.458555 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474270 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:41 crc kubenswrapper[4766]: E0130 16:42:41.474662 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474676 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd" Jan 30 16:42:41 crc kubenswrapper[4766]: E0130 16:42:41.474693 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474699 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474863 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-log" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.474879 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" containerName="glance-httpd" Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.475831 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.475926 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.488225 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.505011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593918 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.593949 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594102 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.594143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696222 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696366 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696504 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.696526 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.697907 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.698407 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.698660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.705823 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.708949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.716130 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.724418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.725543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.773735 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:42:41 crc kubenswrapper[4766]: I0130 16:42:41.820737 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
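The block above is the kubelet volume reconciler's normal happy path for a pod: each volume in the spec is first confirmed attached (VerifyControllerAttachedVolume), then mounted (MountVolume started), with the local PV additionally passing through MountVolume.MountDevice before MountVolume.SetUp. A minimal client-go sketch (not part of the log; the kubeconfig path and the surrounding program are assumptions) that lists the same volume names from the pod spec for cross-checking:

    // Sketch: print the volumes of the pod whose mount sequence appears above,
    // so each name can be matched to a VerifyControllerAttachedVolume /
    // MountVolume.SetUp pair in the kubelet log.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed location
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "glance-default-internal-api-0", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, v := range pod.Spec.Volumes {
            fmt.Println(v.Name) // httpd-run, logs, scripts, config-data, internal-tls-certs, ...
        }
    }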
Jan 30 16:42:42 crc kubenswrapper[4766]: I0130 16:42:42.080102 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89845731-1ffc-4f79-a979-d83068cebc2a" path="/var/lib/kubelet/pods/89845731-1ffc-4f79-a979-d83068cebc2a/volumes"
Jan 30 16:42:49 crc kubenswrapper[4766]: I0130 16:42:49.667759 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:54 crc kubenswrapper[4766]: I0130 16:42:54.669223 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:59 crc kubenswrapper[4766]: I0130 16:42:59.670282 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:42:59 crc kubenswrapper[4766]: I0130 16:42:59.671046 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.182514 4766 scope.go:117] "RemoveContainer" containerID="12fc3e700a602c61f6d7095c65bbcc8d24d4b615d031b5becb78070ca50a6e0b"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.210997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbqvh" event={"ID":"22fc62b3-3a89-44ec-8f23-4182b363478c","Type":"ContainerDied","Data":"cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"}
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.211045 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8dd33dc29b2286c115159871279671c13b4e68f9e215e5899370d3d4a8576e"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.215953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" event={"ID":"5be49188-9169-438f-a8df-6bd5d8dd29fd","Type":"ContainerDied","Data":"12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"}
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.215994 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12785cb0c22675855895839970651119da7335d185eeab854fc2e6552f272d1d"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.283559 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.283773 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k75sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-mq5sq_openstack(83c08adc-cebc-4bff-8994-d8f1f0cb59d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:43:01 crc kubenswrapper[4766]: E0130 16:43:01.285863 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-mq5sq" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.294358 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.300333 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh"
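The two error entries above are the standard image-pull failure ladder: the CRI pull fails (ErrImagePull, here because the pull context was canceled while copying the image config), the pod worker logs "Error syncing pod, skipping", and on the next sync the same container surfaces as ImagePullBackOff with a growing back-off delay (visible for this same pod a little further down). A sketch of the API-side view of that condition, under the same assumed client-go setup as the previous snippet:

    // Sketch: surface containers waiting in ErrImagePull or ImagePullBackOff,
    // i.e. the pod-status counterpart of the kubelet entries above.
    func reportPullFailures(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            for _, st := range p.Status.ContainerStatuses {
                if w := st.State.Waiting; w != nil &&
                    (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
                    fmt.Printf("%s/%s container %q: %s: %s\n",
                        p.Namespace, p.Name, st.Name, w.Reason, w.Message)
                }
            }
        }
        return nil
    }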
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387218 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.387537 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388150 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388198 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") pod \"5be49188-9169-438f-a8df-6bd5d8dd29fd\" (UID: \"5be49188-9169-438f-a8df-6bd5d8dd29fd\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.388227 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") pod \"22fc62b3-3a89-44ec-8f23-4182b363478c\" (UID: \"22fc62b3-3a89-44ec-8f23-4182b363478c\") "
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.394709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv" (OuterVolumeSpecName: "kube-api-access-nmfnv") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "kube-api-access-nmfnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.396520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.397570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts" (OuterVolumeSpecName: "scripts") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.398741 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.417380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r" (OuterVolumeSpecName: "kube-api-access-nxp2r") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "kube-api-access-nxp2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.441941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.444826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data" (OuterVolumeSpecName: "config-data") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.445372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22fc62b3-3a89-44ec-8f23-4182b363478c" (UID: "22fc62b3-3a89-44ec-8f23-4182b363478c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.447454 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config" (OuterVolumeSpecName: "config") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.449039 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.457032 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.464112 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5be49188-9169-438f-a8df-6bd5d8dd29fd" (UID: "5be49188-9169-438f-a8df-6bd5d8dd29fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492199 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492251 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492262 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492362 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492378 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492393 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxp2r\" (UniqueName: \"kubernetes.io/projected/22fc62b3-3a89-44ec-8f23-4182b363478c-kube-api-access-nxp2r\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492408 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492468 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492504 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492515 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5be49188-9169-438f-a8df-6bd5d8dd29fd-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492526 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fc62b3-3a89-44ec-8f23-4182b363478c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:01 crc kubenswrapper[4766]: I0130 16:43:01.492538 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmfnv\" (UniqueName: \"kubernetes.io/projected/5be49188-9169-438f-a8df-6bd5d8dd29fd-kube-api-access-nmfnv\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.225101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l"
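Teardown is the mirror image of the mount sequence at the top of this excerpt: UnmountVolume started, then UnmountVolume.TearDown succeeded, then "Volume detached" once the reconciler drops the volume from its actual state of world; the per-pod volumes directory is swept afterwards ("Cleaned up orphaned pod volumes dir" at 16:43:04 below). A sketch of driving this from the API side, with the same assumed client-go setup; the kubelet half of the operation is exactly the sequence above:

    // Sketch: delete a pod and wait for the API object to disappear. Uses
    // k8s.io/apimachinery's errors (apierrors) and wait packages, plus "time".
    func deleteAndWait(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
            return err
        }
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil // fully gone; volumes have been torn down
                }
                return false, err
            })
    }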
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.225383 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbqvh"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.229937 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-mq5sq" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.276531 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.285214 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-jfh6l"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.431640 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.454604 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hbqvh"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522328 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522735 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="init"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522753 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="init"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522766 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522773 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: E0130 16:43:02.522787 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522794 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.522978 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" containerName="keystone-bootstrap"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.523014 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.523971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530204 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530495 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.530966 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.532134 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.532432 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.537110 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612708 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612732 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.612840 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714292 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714542 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.714566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.718992 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.720066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.721485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.721897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.732227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.733334 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"keystone-bootstrap-2jkw8\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") " pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:02 crc kubenswrapper[4766]: I0130 16:43:02.850165 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.049752 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fc62b3-3a89-44ec-8f23-4182b363478c" path="/var/lib/kubelet/pods/22fc62b3-3a89-44ec-8f23-4182b363478c/volumes"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.050326 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" path="/var/lib/kubelet/pods/5be49188-9169-438f-a8df-6bd5d8dd29fd/volumes"
Jan 30 16:43:04 crc kubenswrapper[4766]: I0130 16:43:04.671982 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-jfh6l" podUID="5be49188-9169-438f-a8df-6bd5d8dd29fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.238321 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.239052 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g5xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zgzf5_openstack(ad8b317f-6f81-4ac9-a854-7b71e384ed98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.240315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zgzf5" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98"
Jan 30 16:43:10 crc kubenswrapper[4766]: E0130 16:43:10.304493 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-zgzf5" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98"
Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.018230 4766 scope.go:117] "RemoveContainer" containerID="6ca8dc52678762b9a6731937231aea93b115df4bd946ac847c87adee0d67eba8"
Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.039344 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.039520 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2627,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rxmkt_openstack(3a05e847-bb50-49ab-821d-e2432c0f01e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.040747 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rxmkt" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9"
Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.171784 4766 scope.go:117] "RemoveContainer" containerID="05f83c6743616a1a228900808a01d7d7df378d9a76d8d0157d86c6fa042c029f"
Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.343867 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"}
Jan 30 16:43:14 crc kubenswrapper[4766]: E0130 16:43:14.353970 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rxmkt" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9"
Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.550508 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"]
Jan 30 16:43:14 crc kubenswrapper[4766]: W0130 16:43:14.572665 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59eff57d_cb92_4c52_aad2_6e43b3908fd4.slice/crio-a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2 WatchSource:0}: Error finding container a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2: Status 404 returned error can't find the container with id a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2
Jan 30 16:43:14 crc kubenswrapper[4766]: I0130 16:43:14.696657 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.238406 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.353100 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a"}
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.353161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"323ddb58f9d31b5bc758e9920b4b5a6270bffb075aa3aec77b37c8af05f7ec01"}
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.354557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerStarted","Data":"fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508"}
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.354595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerStarted","Data":"a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2"}
Jan 30 16:43:15 crc kubenswrapper[4766]: I0130 16:43:15.377678 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2jkw8" podStartSLOduration=13.377659578 podStartE2EDuration="13.377659578s" podCreationTimestamp="2026-01-30 16:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:15.370385123 +0000 UTC m=+1250.008342469" watchObservedRunningTime="2026-01-30 16:43:15.377659578 +0000 UTC m=+1250.015616924"
Jan 30 16:43:15 crc kubenswrapper[4766]: W0130 16:43:15.502248 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f88e91_eb62_45a5_bfcb_d38a918e23da.slice/crio-935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7 WatchSource:0}: Error finding container 935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7: Status 404 returned error can't find the container with id 935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7
Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.371645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"}
Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.377489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerStarted","Data":"3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938"}
Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.381678 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"}
Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.381920 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7"}
Jan 30 16:43:16 crc kubenswrapper[4766]: I0130 16:43:16.404578 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=35.404558272 podStartE2EDuration="35.404558272s" podCreationTimestamp="2026-01-30 16:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:16.398079257 +0000 UTC m=+1251.036036603" watchObservedRunningTime="2026-01-30 16:43:16.404558272 +0000 UTC m=+1251.042515618"
Jan 30 16:43:17 crc kubenswrapper[4766]: I0130 16:43:17.393162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerStarted","Data":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"}
Jan 30 16:43:17 crc kubenswrapper[4766]: I0130 16:43:17.425779 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=40.425756351 podStartE2EDuration="40.425756351s" podCreationTimestamp="2026-01-30 16:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:17.411294313 +0000 UTC m=+1252.049251669" watchObservedRunningTime="2026-01-30 16:43:17.425756351 +0000 UTC m=+1252.063713697"
Jan 30 16:43:18 crc kubenswrapper[4766]: I0130 16:43:18.404409 4766 generic.go:334] "Generic (PLEG): container finished" podID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerID="fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508" exitCode=0
Jan 30 16:43:18 crc kubenswrapper[4766]: I0130 16:43:18.404476 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerDied","Data":"fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508"}
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.821329 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.821743 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
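The two entries just above, and the "started" transitions immediately below, are the kubelet's probe plumbing: each startup-probe state change arrives as a "SyncLoop (probe)" event that re-triggers a pod sync, while the earlier readiness failures (the dnsmasq dial timeouts) only mark the pod NotReady without restarting anything. A sketch of the kind of probe spec that drives such entries; every concrete value here (endpoint path, port, thresholds) is an assumption, not something recoverable from the log:

    // Sketch using k8s.io/api/core/v1 and k8s.io/apimachinery/pkg/util/intstr.
    startup := &corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path:   "/healthcheck",        // assumed endpoint
                Port:   intstr.FromInt(9292),  // assumed glance API port
                Scheme: corev1.URISchemeHTTPS,
            },
        },
        PeriodSeconds:    5,  // each failed attempt logs a "Probe failed" entry
        FailureThreshold: 12, // the container is restarted only after all attempts fail
    }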
pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.850381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.860221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.874588 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.955796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.955984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956139 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956401 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.956587 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") pod \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\" (UID: \"59eff57d-cb92-4c52-aad2-6e43b3908fd4\") "
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961011 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961406 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts" (OuterVolumeSpecName: "scripts") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.961592 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq" (OuterVolumeSpecName: "kube-api-access-d9kbq") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "kube-api-access-d9kbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.963016 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.990866 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data" (OuterVolumeSpecName: "config-data") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:21 crc kubenswrapper[4766]: I0130 16:43:21.993637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59eff57d-cb92-4c52-aad2-6e43b3908fd4" (UID: "59eff57d-cb92-4c52-aad2-6e43b3908fd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.059831 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060347 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060362 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060373 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060386 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9kbq\" (UniqueName: \"kubernetes.io/projected/59eff57d-cb92-4c52-aad2-6e43b3908fd4-kube-api-access-d9kbq\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.060397 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59eff57d-cb92-4c52-aad2-6e43b3908fd4-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450250 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2jkw8" event={"ID":"59eff57d-cb92-4c52-aad2-6e43b3908fd4","Type":"ContainerDied","Data":"a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2"}
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450296 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8649f87a88174d17b6487719f5885622c831c9443f1da0c32da65d70df7cac2"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.450260 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2jkw8"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.453003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"}
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457032 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerStarted","Data":"d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8"}
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457114 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.457139 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.487549 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mq5sq" podStartSLOduration=2.401719007 podStartE2EDuration="53.487523576s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.839743703 +0000 UTC m=+1205.477701049" lastFinishedPulling="2026-01-30 16:43:21.925548262 +0000 UTC m=+1256.563505618" observedRunningTime="2026-01-30 16:43:22.485648545 +0000 UTC m=+1257.123605941" watchObservedRunningTime="2026-01-30 16:43:22.487523576 +0000 UTC m=+1257.125480962"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967262 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"]
Jan 30 16:43:22 crc kubenswrapper[4766]: E0130 16:43:22.967724 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967740 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.967993 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" containerName="keystone-bootstrap"
Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.968679 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk"
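The pod_startup_latency_tracker entry for placement-db-sync-mq5sq a few lines above is easier to read once decoded: podStartE2EDuration is wall-clock time from pod creation to observed running (~53.49s), while podStartSLOduration follows the Kubernetes pod-startup SLI, which excludes image pulling. Subtracting the recorded pull window (16:42:30.839743703 to 16:43:21.925548262, roughly 51.09s) from the E2E figure recovers the reported 2.40s. A small Go sketch of the arithmetic, using the timestamps from that entry:

    // Sketch: reproduce podStartSLOduration = E2E duration minus pull time.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    first, _ := time.Parse(layout, "2026-01-30 16:42:30.839743703 +0000 UTC")
    last, _ := time.Parse(layout, "2026-01-30 16:43:21.925548262 +0000 UTC")
    e2e := 53.487523576 * float64(time.Second)
    fmt.Println(time.Duration(e2e) - last.Sub(first)) // ≈ 2.401719s, matching the log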
Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.971249 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.986154 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987252 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987860 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:43:22 crc kubenswrapper[4766]: I0130 16:43:22.987933 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftsn6" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:22.999853 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088477 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088572 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" 
(UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088682 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.088713 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190518 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190616 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190678 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " 
pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.190724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.196948 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.197087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.198132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.199747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.200658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.201365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.202714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.219651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"keystone-7bc6f65df6-mx4xk\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") " pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.292528 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:23 crc kubenswrapper[4766]: I0130 16:43:23.794880 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.480275 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.480674 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482512 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerStarted","Data":"7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188"} Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482595 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.482607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerStarted","Data":"f7e59fee20a8c8c4ebf0975c2f9adc338f4c7ce8ad17f7e1383af919425199ff"} Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.512256 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7bc6f65df6-mx4xk" podStartSLOduration=2.511870112 podStartE2EDuration="2.511870112s" podCreationTimestamp="2026-01-30 16:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:24.501916455 +0000 UTC m=+1259.139873801" watchObservedRunningTime="2026-01-30 16:43:24.511870112 +0000 UTC m=+1259.149827458" Jan 30 16:43:24 crc kubenswrapper[4766]: E0130 16:43:24.602451 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c08adc_cebc_4bff_8994_d8f1f0cb59d7.slice/crio-d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.776975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:24 crc kubenswrapper[4766]: I0130 16:43:24.777937 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.493227 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerStarted","Data":"41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3"} Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.497802 4766 generic.go:334] "Generic (PLEG): container finished" podID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerID="d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8" exitCode=0 Jan 30 16:43:25 crc kubenswrapper[4766]: I0130 16:43:25.497938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerDied","Data":"d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8"} Jan 30 16:43:25 crc 
kubenswrapper[4766]: I0130 16:43:25.545824 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zgzf5" podStartSLOduration=2.623225568 podStartE2EDuration="56.545805484s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:31.031497736 +0000 UTC m=+1205.669455082" lastFinishedPulling="2026-01-30 16:43:24.954077662 +0000 UTC m=+1259.592034998" observedRunningTime="2026-01-30 16:43:25.514491494 +0000 UTC m=+1260.152448840" watchObservedRunningTime="2026-01-30 16:43:25.545805484 +0000 UTC m=+1260.183762830" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.517246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerStarted","Data":"590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7"} Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.545976 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rxmkt" podStartSLOduration=2.67032561 podStartE2EDuration="57.545957069s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.598517512 +0000 UTC m=+1205.236474858" lastFinishedPulling="2026-01-30 16:43:25.474148971 +0000 UTC m=+1260.112106317" observedRunningTime="2026-01-30 16:43:26.537962405 +0000 UTC m=+1261.175919751" watchObservedRunningTime="2026-01-30 16:43:26.545957069 +0000 UTC m=+1261.183914415" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.929345 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960654 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.960863 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.961048 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.964642 4766 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs" (OuterVolumeSpecName: "logs") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.969281 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts" (OuterVolumeSpecName: "scripts") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.971229 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk" (OuterVolumeSpecName: "kube-api-access-k75sk") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "kube-api-access-k75sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:26 crc kubenswrapper[4766]: E0130 16:43:26.990516 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data podName:83c08adc-cebc-4bff-8994-d8f1f0cb59d7 nodeName:}" failed. No retries permitted until 2026-01-30 16:43:27.490486292 +0000 UTC m=+1262.128443638 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7") : error deleting /var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volume-subpaths: remove /var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volume-subpaths: no such file or directory Jan 30 16:43:26 crc kubenswrapper[4766]: I0130 16:43:26.993224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065771 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065809 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065823 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75sk\" (UniqueName: \"kubernetes.io/projected/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-kube-api-access-k75sk\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.065841 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.360752 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.360804 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.392865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.413641 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.528452 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mq5sq" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mq5sq" event={"ID":"83c08adc-cebc-4bff-8994-d8f1f0cb59d7","Type":"ContainerDied","Data":"7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f"} Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530766 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7caac3e0c06feb794717f6f40765ed2205ff79a69ccdb722b91c767580ccb20f" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.530796 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.531063 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.574316 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") pod \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\" (UID: \"83c08adc-cebc-4bff-8994-d8f1f0cb59d7\") " Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.578501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data" (OuterVolumeSpecName: "config-data") pod "83c08adc-cebc-4bff-8994-d8f1f0cb59d7" (UID: "83c08adc-cebc-4bff-8994-d8f1f0cb59d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.677314 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c08adc-cebc-4bff-8994-d8f1f0cb59d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.691986 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:27 crc kubenswrapper[4766]: E0130 16:43:27.692419 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.692450 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.692681 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" containerName="placement-db-sync" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.693982 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.696331 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.697705 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.715414 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.778975 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779048 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.779289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881803 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881877 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881921 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.881988 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.882126 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.884010 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888136 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") 
" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.888753 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.892870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.900169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:27 crc kubenswrapper[4766]: I0130 16:43:27.919621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"placement-69d8797fb6-zzsfd\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:28 crc kubenswrapper[4766]: I0130 16:43:28.019428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.546412 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.546963 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.653647 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:43:29 crc kubenswrapper[4766]: I0130 16:43:29.663365 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.308406 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.569427 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"e94bea3a22075449c7ce733d15ed50c31bf49ec686272c0a7961479d9194b9c6"} Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.576876 4766 generic.go:334] "Generic (PLEG): container finished" podID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerID="41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3" exitCode=0 Jan 30 16:43:32 crc kubenswrapper[4766]: I0130 16:43:32.576921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerDied","Data":"41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588072 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerStarted","Data":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588475 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588255 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd" containerID="cri-o://2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588169 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent" containerID="cri-o://8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588298 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core" containerID="cri-o://eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.588259 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent" containerID="cri-o://29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" gracePeriod=30 Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerStarted","Data":"13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64"} Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.592639 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.635668 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.131124551 podStartE2EDuration="1m4.635643766s" podCreationTimestamp="2026-01-30 16:42:29 +0000 UTC" firstStartedPulling="2026-01-30 16:42:30.830167056 +0000 UTC m=+1205.468124402" lastFinishedPulling="2026-01-30 16:43:32.334686271 +0000 UTC m=+1266.972643617" observedRunningTime="2026-01-30 16:43:33.622890913 +0000 UTC m=+1268.260848309" watchObservedRunningTime="2026-01-30 16:43:33.635643766 +0000 UTC m=+1268.273601152" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.656172 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69d8797fb6-zzsfd" podStartSLOduration=6.656145545 
podStartE2EDuration="6.656145545s" podCreationTimestamp="2026-01-30 16:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:33.647654858 +0000 UTC m=+1268.285612224" watchObservedRunningTime="2026-01-30 16:43:33.656145545 +0000 UTC m=+1268.294102891" Jan 30 16:43:33 crc kubenswrapper[4766]: I0130 16:43:33.918946 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.016786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") pod \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\" (UID: \"ad8b317f-6f81-4ac9-a854-7b71e384ed98\") " Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.023191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs" (OuterVolumeSpecName: "kube-api-access-6g5xs") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "kube-api-access-6g5xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.023596 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.043285 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad8b317f-6f81-4ac9-a854-7b71e384ed98" (UID: "ad8b317f-6f81-4ac9-a854-7b71e384ed98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119486 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119542 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g5xs\" (UniqueName: \"kubernetes.io/projected/ad8b317f-6f81-4ac9-a854-7b71e384ed98-kube-api-access-6g5xs\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.119562 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad8b317f-6f81-4ac9-a854-7b71e384ed98-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604728 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498" exitCode=0 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604777 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616" exitCode=2 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604799 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc" exitCode=0 Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.604917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606717 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zgzf5" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zgzf5" event={"ID":"ad8b317f-6f81-4ac9-a854-7b71e384ed98","Type":"ContainerDied","Data":"e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731"} Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.606846 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09f31873ccd116f2a3b1ef9422cf9428666d4cb02bc17d4466e621c29db9731" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.880729 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:43:34 crc kubenswrapper[4766]: E0130 16:43:34.881088 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.881102 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.881417 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" containerName="barbican-db-sync" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.882274 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.885755 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47zjc" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.886256 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.886487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.930050 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.997621 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:43:34 crc kubenswrapper[4766]: I0130 16:43:34.999028 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.004558 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.028559 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035354 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035748 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.035837 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.137905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.137981 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138047 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138280 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138425 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.138499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.144144 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:43:35 crc 
kubenswrapper[4766]: I0130 16:43:35.144788 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"]
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.148921 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.165630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.165946 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"]
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.166750 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.179957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.180494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"barbican-worker-d6c45fdd9-srlkx\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " pod="openstack/barbican-worker-d6c45fdd9-srlkx"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.227477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240697 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240728 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240776 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.240974 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.241037 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.241114 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.242774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.252061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.252753 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.253390 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.259739 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.261267 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.266479 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.270903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"barbican-keystone-listener-5c649fd446-flqwn\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.290762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.332586 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343434 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343679 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.343878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.345268 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.345949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.347707 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.351697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.351954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.373651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"dnsmasq-dns-59d5ff467f-czb2k\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.446752 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447776 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.447871 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.448487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.452652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.452744 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.453087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.474336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"barbican-api-869cbffcd-4n87d\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.629748 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.641827 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:35 crc kubenswrapper[4766]: W0130 16:43:35.785529 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13e6f63_37d4_4780_9902_430a9669901c.slice/crio-2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409 WatchSource:0}: Error finding container 2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409: Status 404 returned error can't find the container with id 2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.788505 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"]
Jan 30 16:43:35 crc kubenswrapper[4766]: W0130 16:43:35.847157 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3 WatchSource:0}: Error finding container c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3: Status 404 returned error can't find the container with id c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3
Jan 30 16:43:35 crc kubenswrapper[4766]: I0130 16:43:35.849402 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"]
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.108800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"]
Jan 30 16:43:36 crc kubenswrapper[4766]: W0130 16:43:36.112472 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee1aefba_bd2e_47f2_832c_7e74e707ad69.slice/crio-0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81 WatchSource:0}: Error finding container 0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81: Status 404 returned error can't find the container with id 0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81
Jan 30 16:43:36 crc kubenswrapper[4766]: W0130 16:43:36.114766 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0217e5_bcc8_482c_9e44_4be03ee7d059.slice/crio-92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf WatchSource:0}: Error finding container 92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf: Status 404 returned error can't find the container with id 92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.115949 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"]
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.651549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409"}
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.654464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3"}
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.656916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"}
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.656958 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf"}
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663064 4766 generic.go:334] "Generic (PLEG): container finished" podID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerID="9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d" exitCode=0
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d"}
Jan 30 16:43:36 crc kubenswrapper[4766]: I0130 16:43:36.663154 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerStarted","Data":"0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81"}
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.685516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerStarted","Data":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"}
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688020 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerID="590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7" exitCode=0
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerDied","Data":"590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7"}
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688683 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.688867 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-869cbffcd-4n87d"
Jan 30 16:43:37 crc kubenswrapper[4766]: I0130 16:43:37.774070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-869cbffcd-4n87d" podStartSLOduration=2.774046564 podStartE2EDuration="2.774046564s" podCreationTimestamp="2026-01-30 16:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:37.749566707 +0000 UTC m=+1272.387524073" watchObservedRunningTime="2026-01-30 16:43:37.774046564 +0000 UTC m=+1272.412003910"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.123421 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"]
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.125719 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.128084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.128287 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.133116 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.134706 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"]
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211544 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211590 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211641 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211798 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.211942 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") pod \"14501411-a443-4f68-93ed-4cadcbc48b9f\" (UID: \"14501411-a443-4f68-93ed-4cadcbc48b9f\") "
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212275 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212330 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212372 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.212457 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.213239 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.213337 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.233064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s" (OuterVolumeSpecName: "kube-api-access-hr64s") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "kube-api-access-hr64s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.239864 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.242755 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts" (OuterVolumeSpecName: "scripts") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.287205 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314377 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314517 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314736 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314760 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314810 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr64s\" (UniqueName: \"kubernetes.io/projected/14501411-a443-4f68-93ed-4cadcbc48b9f-kube-api-access-hr64s\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314824 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314837 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.314884 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14501411-a443-4f68-93ed-4cadcbc48b9f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.315752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.316669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data" (OuterVolumeSpecName: "config-data") pod "14501411-a443-4f68-93ed-4cadcbc48b9f" (UID: "14501411-a443-4f68-93ed-4cadcbc48b9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.318120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.319614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.320110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.320895 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.328921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.331516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"barbican-api-7b946b75c8-zb6q6\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.416354 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14501411-a443-4f68-93ed-4cadcbc48b9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.508428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.717278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.717758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerStarted","Data":"712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731042 4766 generic.go:334] "Generic (PLEG): container finished" podID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61" exitCode=0
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731149 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14501411-a443-4f68-93ed-4cadcbc48b9f","Type":"ContainerDied","Data":"80541219c3010f86d328821046e3eb93ce24469ac922b57c41a30f77d511e82f"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731223 4766 scope.go:117] "RemoveContainer" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.731396 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.749733 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podStartSLOduration=3.149230515 podStartE2EDuration="4.749711052s" podCreationTimestamp="2026-01-30 16:43:34 +0000 UTC" firstStartedPulling="2026-01-30 16:43:35.85046251 +0000 UTC m=+1270.488419856" lastFinishedPulling="2026-01-30 16:43:37.450943047 +0000 UTC m=+1272.088900393" observedRunningTime="2026-01-30 16:43:38.741520733 +0000 UTC m=+1273.379478079" watchObservedRunningTime="2026-01-30 16:43:38.749711052 +0000 UTC m=+1273.387668398"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.752630 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerID="c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405" exitCode=0
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.752719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerDied","Data":"c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.770274 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerStarted","Data":"bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.770407 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.774573 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.774667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerStarted","Data":"e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6"}
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.819918 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" podStartSLOduration=3.819898765 podStartE2EDuration="3.819898765s" podCreationTimestamp="2026-01-30 16:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:38.798148401 +0000 UTC m=+1273.436105757" watchObservedRunningTime="2026-01-30 16:43:38.819898765 +0000 UTC m=+1273.457856111"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.845596 4766 scope.go:117] "RemoveContainer" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.847290 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d6c45fdd9-srlkx" podStartSLOduration=3.186033602 podStartE2EDuration="4.847268729s" podCreationTimestamp="2026-01-30 16:43:34 +0000 UTC" firstStartedPulling="2026-01-30 16:43:35.789732031 +0000 UTC m=+1270.427689387" lastFinishedPulling="2026-01-30 16:43:37.450967168 +0000 UTC m=+1272.088924514" observedRunningTime="2026-01-30 16:43:38.820534242 +0000 UTC m=+1273.458491598" watchObservedRunningTime="2026-01-30 16:43:38.847268729 +0000 UTC m=+1273.485226075"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.879883 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.892256 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.900427 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901000 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901016 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901030 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901036 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901052 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901058 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.901074 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901080 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901242 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-central-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901271 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="sg-core"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901289 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="proxy-httpd"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.901304 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" containerName="ceilometer-notification-agent"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.903578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.904657 4766 scope.go:117] "RemoveContainer" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.908008 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.908319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.913869 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.950210 4766 scope.go:117] "RemoveContainer" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976279 4766 scope.go:117] "RemoveContainer" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.976841 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": container with ID starting with 2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498 not found: ID does not exist" containerID="2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976879 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498"} err="failed to get container status \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": rpc error: code = NotFound desc = could not find container \"2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498\": container with ID starting with 2dd4b41c08df4639b3a0f7331cc05e4486025741bd175061a52afd1a5dbed498 not found: ID does not exist"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.976926 4766 scope.go:117] "RemoveContainer" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.977276 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": container with ID starting with eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616 not found: ID does not exist" containerID="eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977299 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616"} err="failed to get container status \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": rpc error: code = NotFound desc = could not find container \"eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616\": container with ID starting with eafc3f5c849b2dd0f1f5325be3c9f10da539dc0774b466b2e2ee8e51194a6616 not found: ID does not exist"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977320 4766 scope.go:117] "RemoveContainer" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.977651 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": container with ID starting with 29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61 not found: ID does not exist" containerID="29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977715 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61"} err="failed to get container status \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": rpc error: code = NotFound desc = could not find container \"29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61\": container with ID starting with 29f2e1e9d854f93be08c13178f68a83ac09bfae5d8769fe9b88954c95b0e3d61 not found: ID does not exist"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.977733 4766 scope.go:117] "RemoveContainer" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"
Jan 30 16:43:38 crc kubenswrapper[4766]: E0130 16:43:38.979785 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": container with ID starting with 8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc not found: ID does not exist" containerID="8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"
Jan 30 16:43:38 crc kubenswrapper[4766]: I0130 16:43:38.979847 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc"} err="failed to get container status \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": rpc error: code = NotFound desc = could not find container \"8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc\": container with ID starting with 8a28e1297786ace5ddf8a0acc683ca89f4eebb3aa58a5204b9dc553eb9ab1afc not found: ID does not exist"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:38.999980 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"]
Jan 30 16:43:39 crc kubenswrapper[4766]: W0130 16:43:39.009341 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17d6e828_fc05_46cb_9bee_bac08ebf331a.slice/crio-d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0 WatchSource:0}: Error finding container d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0: Status 404 returned error can't find the container with id d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028762 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.028917 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029093 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029320 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.029345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.045046 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.045099 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131891 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.131985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132086 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.132168 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.133121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.135028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.141391 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.141944 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.142298 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.150316 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.159383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"ceilometer-0\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.228886 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.378811 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rxmkt"
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436305 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436518 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436585 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.436621 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") pod \"3a05e847-bb50-49ab-821d-e2432c0f01e9\" (UID: \"3a05e847-bb50-49ab-821d-e2432c0f01e9\") "
Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.438783 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "etc-machine-id".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.441728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.457050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627" (OuterVolumeSpecName: "kube-api-access-q2627") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "kube-api-access-q2627". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.459494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts" (OuterVolumeSpecName: "scripts") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.484380 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.504609 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data" (OuterVolumeSpecName: "config-data") pod "3a05e847-bb50-49ab-821d-e2432c0f01e9" (UID: "3a05e847-bb50-49ab-821d-e2432c0f01e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540805 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a05e847-bb50-49ab-821d-e2432c0f01e9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540838 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2627\" (UniqueName: \"kubernetes.io/projected/3a05e847-bb50-49ab-821d-e2432c0f01e9-kube-api-access-q2627\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540849 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540861 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540870 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.540878 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05e847-bb50-49ab-821d-e2432c0f01e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.729857 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.785423 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"cd2c2b2506c59c114c23d0ceb86a25fba0633c14ce109f4881053f349d4e17dc"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789197 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerStarted","Data":"d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789884 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.789981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792676 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rxmkt" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rxmkt" event={"ID":"3a05e847-bb50-49ab-821d-e2432c0f01e9","Type":"ContainerDied","Data":"229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf"} Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.792925 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="229d0980cc7e5e26832bda068f3b2059b081d7bd956f13cd9eecf8d3a512baaf" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.975044 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b946b75c8-zb6q6" podStartSLOduration=1.975023837 podStartE2EDuration="1.975023837s" podCreationTimestamp="2026-01-30 16:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:39.823202575 +0000 UTC m=+1274.461160021" watchObservedRunningTime="2026-01-30 16:43:39.975023837 +0000 UTC m=+1274.612981183" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.979612 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:39 crc kubenswrapper[4766]: E0130 16:43:39.980011 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.980027 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.980281 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" containerName="cinder-db-sync" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.981454 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.983405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.983972 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.984086 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:43:39 crc kubenswrapper[4766]: I0130 16:43:39.984841 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rbvkd" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.006792 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050361 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050407 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050541 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.050593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.058268 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14501411-a443-4f68-93ed-4cadcbc48b9f" path="/var/lib/kubelet/pods/14501411-a443-4f68-93ed-4cadcbc48b9f/volumes" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.059334 4766 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.081893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.084199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.108734 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162476 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162511 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162555 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162687 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162711 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162810 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162851 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.162914 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.163012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.167828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.172104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.172960 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.178467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc 
kubenswrapper[4766]: I0130 16:43:40.191935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"cinder-scheduler-0\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264659 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.264762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.265804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.266445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.268229 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.268898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.272608 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.273906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.284627 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"dnsmasq-dns-69c986f6d7-wtv6m\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.316824 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.318957 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: E0130 16:43:40.319368 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.319388 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.319597 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" containerName="neutron-db-sync" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.320589 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.328196 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.331152 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366135 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") pod \"4bc27037-152a-461b-bce1-6d37b38bbb95\" (UID: \"4bc27037-152a-461b-bce1-6d37b38bbb95\") " Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366428 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366459 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366503 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366545 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.366636 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.372591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw" (OuterVolumeSpecName: "kube-api-access-ql6bw") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "kube-api-access-ql6bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.409424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config" (OuterVolumeSpecName: "config") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.426734 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc27037-152a-461b-bce1-6d37b38bbb95" (UID: "4bc27037-152a-461b-bce1-6d37b38bbb95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.430985 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.468991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.481256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.481857 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482392 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482423 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482617 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482630 4766 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-ql6bw\" (UniqueName: \"kubernetes.io/projected/4bc27037-152a-461b-bce1-6d37b38bbb95-kube-api-access-ql6bw\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482641 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc27037-152a-461b-bce1-6d37b38bbb95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482759 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.482806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.490049 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.490760 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.491129 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"cinder-api-0\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.769979 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817621 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sc6rp" event={"ID":"4bc27037-152a-461b-bce1-6d37b38bbb95","Type":"ContainerDied","Data":"fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282"} Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817639 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sc6rp" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.817693 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbfa8e0065a380d3ba4a52bbdffd41bedf11875edae24ac7fb676379d4ea282" Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.820336 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.820471 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns" containerID="cri-o://bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" gracePeriod=10 Jan 30 16:43:40 crc kubenswrapper[4766]: I0130 16:43:40.914049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.025660 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.059613 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.136813 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.138554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.195706 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217241 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 
16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217298 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.217331 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.327398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328578 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.328700 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.329467 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: 
I0130 16:43:41.329752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.332360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.335229 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.336698 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.337574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.338494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.339404 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.343870 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.344206 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.346668 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.356770 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.376513 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d97nd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.394104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"dnsmasq-dns-5784cf869f-689xd\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431360 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " 
pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431409 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.431551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.477006 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533704 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533796 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.533816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: 
\"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.544203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.549525 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.550207 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.550853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.564657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"neutron-5995f74f66-6c62l\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.789843 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.869682 4766 generic.go:334] "Generic (PLEG): container finished" podID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerID="bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" exitCode=0 Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.869786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.871888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"6b7b6fbe45be35df26ed12004dacb8c6bf29682f09f9e1548db68481d831f9f3"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.876937 4766 generic.go:334] "Generic (PLEG): container finished" podID="e0cf707d-1c30-442d-8430-e714bd68752a" containerID="315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b" exitCode=0 Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.877012 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerDied","Data":"315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.877050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerStarted","Data":"d9494d16b1950242e2d85088ae6e45881e6fe2494c0a57e45b5cbe2dedb19001"} Jan 30 16:43:41 crc kubenswrapper[4766]: I0130 16:43:41.895324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"c8586f92647bbb5a114dcd6f6899c5036c3e271083fa860bf64d7866744bcc76"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.011079 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:43:42 crc kubenswrapper[4766]: W0130 16:43:42.081857 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9443ad_23f2_4953_8fe3_1e30cddbb3ae.slice/crio-4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146 WatchSource:0}: Error finding container 4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146: Status 404 returned error can't find the container with id 4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146 Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.082631 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.149868 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150229 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150453 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.150469 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") pod \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\" (UID: \"ee1aefba-bd2e-47f2-832c-7e74e707ad69\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.157780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl" (OuterVolumeSpecName: "kube-api-access-8xqcl") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "kube-api-access-8xqcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.247296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config" (OuterVolumeSpecName: "config") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.255711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267052 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267086 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xqcl\" (UniqueName: \"kubernetes.io/projected/ee1aefba-bd2e-47f2-832c-7e74e707ad69-kube-api-access-8xqcl\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.267095 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.289732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.314279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.318614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee1aefba-bd2e-47f2-832c-7e74e707ad69" (UID: "ee1aefba-bd2e-47f2-832c-7e74e707ad69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371756 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371795 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.371811 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee1aefba-bd2e-47f2-832c-7e74e707ad69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.544563 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.678078 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.685982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686257 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.686345 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") pod \"e0cf707d-1c30-442d-8430-e714bd68752a\" (UID: \"e0cf707d-1c30-442d-8430-e714bd68752a\") " Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.699432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk" (OuterVolumeSpecName: "kube-api-access-vchwk") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "kube-api-access-vchwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.712862 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.728410 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.746760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.766773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config" (OuterVolumeSpecName: "config") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788027 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788070 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vchwk\" (UniqueName: \"kubernetes.io/projected/e0cf707d-1c30-442d-8430-e714bd68752a-kube-api-access-vchwk\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788087 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788097 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.788105 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.791823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0cf707d-1c30-442d-8430-e714bd68752a" (UID: "e0cf707d-1c30-442d-8430-e714bd68752a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.890754 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0cf707d-1c30-442d-8430-e714bd68752a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919284 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" event={"ID":"ee1aefba-bd2e-47f2-832c-7e74e707ad69","Type":"ContainerDied","Data":"0f05a6427a4592a4fbfb38f5c67f5bbead27aa40c290d9321f78dc9bf122aa81"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919337 4766 scope.go:117] "RemoveContainer" containerID="bc9352799004a876d938ff5e3475c63a67cb821e31390ecd3667042de650c4b3" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.919447 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-czb2k" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.939466 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.942462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" event={"ID":"e0cf707d-1c30-442d-8430-e714bd68752a","Type":"ContainerDied","Data":"d9494d16b1950242e2d85088ae6e45881e6fe2494c0a57e45b5cbe2dedb19001"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.942551 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-wtv6m" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.953499 4766 generic.go:334] "Generic (PLEG): container finished" podID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerID="4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf" exitCode=0 Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.954188 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.954269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerStarted","Data":"4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.963581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"e2b7b271b357b586463753be91e6e23e2c8d157467dd4ac8a1278aee093a63d3"} Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.965586 4766 scope.go:117] "RemoveContainer" containerID="9711eddd329c1e89a7dc01097b8376ca2746bf25cefdc64b1de7bcd30e1ecb4d" Jan 30 16:43:42 crc kubenswrapper[4766]: I0130 16:43:42.987802 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c"} Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.044330 4766 
scope.go:117] "RemoveContainer" containerID="315d1474b9459e278c79e38256369dd5ba88d8a22915ed4e5c5210722342361b" Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.057303 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.064252 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-wtv6m"] Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.072315 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.075758 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-czb2k"] Jan 30 16:43:43 crc kubenswrapper[4766]: I0130 16:43:43.563918 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.006567 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.008554 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.008613 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerStarted","Data":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.012708 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerStarted","Data":"9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.012930 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" containerID="cri-o://672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" gracePeriod=30 Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.013792 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" containerID="cri-o://9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" gracePeriod=30 Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.013253 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.023405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.035560 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.051865 4766 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" path="/var/lib/kubelet/pods/e0cf707d-1c30-442d-8430-e714bd68752a/volumes" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.055855 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" path="/var/lib/kubelet/pods/ee1aefba-bd2e-47f2-832c-7e74e707ad69/volumes" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.056545 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.056572 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerStarted","Data":"c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2"} Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.057019 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5995f74f66-6c62l" podStartSLOduration=3.057002343 podStartE2EDuration="3.057002343s" podCreationTimestamp="2026-01-30 16:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.036031 +0000 UTC m=+1278.673988346" watchObservedRunningTime="2026-01-30 16:43:44.057002343 +0000 UTC m=+1278.694959689" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.066601 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.066581109 podStartE2EDuration="4.066581109s" podCreationTimestamp="2026-01-30 16:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.05691421 +0000 UTC m=+1278.694871556" watchObservedRunningTime="2026-01-30 16:43:44.066581109 +0000 UTC m=+1278.704538455" Jan 30 16:43:44 crc kubenswrapper[4766]: I0130 16:43:44.084987 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-689xd" podStartSLOduration=3.084970643 podStartE2EDuration="3.084970643s" podCreationTimestamp="2026-01-30 16:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:44.082415464 +0000 UTC m=+1278.720372810" watchObservedRunningTime="2026-01-30 16:43:44.084970643 +0000 UTC m=+1278.722927989" Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.066669 4766 generic.go:334] "Generic (PLEG): container finished" podID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerID="672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" exitCode=143 Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.067013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c"} Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.073652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerStarted","Data":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"} Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.099144 4766 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.496741795 podStartE2EDuration="6.099122704s" podCreationTimestamp="2026-01-30 16:43:39 +0000 UTC" firstStartedPulling="2026-01-30 16:43:40.943581125 +0000 UTC m=+1275.581538471" lastFinishedPulling="2026-01-30 16:43:42.545962034 +0000 UTC m=+1277.183919380" observedRunningTime="2026-01-30 16:43:45.096841982 +0000 UTC m=+1279.734799348" watchObservedRunningTime="2026-01-30 16:43:45.099122704 +0000 UTC m=+1279.737080050" Jan 30 16:43:45 crc kubenswrapper[4766]: I0130 16:43:45.318120 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.104170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerStarted","Data":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.104992 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.140416 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.145888479 podStartE2EDuration="8.140396822s" podCreationTimestamp="2026-01-30 16:43:38 +0000 UTC" firstStartedPulling="2026-01-30 16:43:39.691160563 +0000 UTC m=+1274.329117909" lastFinishedPulling="2026-01-30 16:43:45.685668906 +0000 UTC m=+1280.323626252" observedRunningTime="2026-01-30 16:43:46.126071768 +0000 UTC m=+1280.764029114" watchObservedRunningTime="2026-01-30 16:43:46.140396822 +0000 UTC m=+1280.778354168" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901497 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.901937 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="init" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901956 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="init" Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.901989 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.901998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init" Jan 30 16:43:46 crc kubenswrapper[4766]: E0130 16:43:46.902016 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902025 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902280 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cf707d-1c30-442d-8430-e714bd68752a" containerName="init" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.902302 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1aefba-bd2e-47f2-832c-7e74e707ad69" containerName="dnsmasq-dns" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.903868 4766 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.905982 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.908084 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.920012 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985862 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985960 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.985994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:46 crc kubenswrapper[4766]: I0130 16:43:46.986022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087718 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: 
\"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087782 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.087815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.088944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.089029 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.093426 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.094128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.095029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: 
\"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.098095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.107414 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.108380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.110790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"neutron-6d4bdf9c45-5nxgr\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") " pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.232308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.341700 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.863454 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:43:47 crc kubenswrapper[4766]: I0130 16:43:47.922949 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:48 crc kubenswrapper[4766]: I0130 16:43:48.139319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666"} Jan 30 16:43:48 crc kubenswrapper[4766]: I0130 16:43:48.139372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"c0a3cd47bf6f73c69d465e105e571ff0dfdead63ace53c2387dc41608358f285"} Jan 30 16:43:49 crc kubenswrapper[4766]: I0130 16:43:49.144260 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerStarted","Data":"7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863"} Jan 30 16:43:49 crc kubenswrapper[4766]: I0130 16:43:49.146068 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.114710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.146696 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d4bdf9c45-5nxgr" podStartSLOduration=4.146673536 podStartE2EDuration="4.146673536s" podCreationTimestamp="2026-01-30 16:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:49.17331078 +0000 UTC m=+1283.811268136" watchObservedRunningTime="2026-01-30 16:43:50.146673536 +0000 UTC m=+1284.784630892" Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.315415 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.396362 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.396588 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" containerID="cri-o://bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" gracePeriod=30 Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.397009 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" containerID="cri-o://997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" gracePeriod=30 Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.654617 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 16:43:50 crc kubenswrapper[4766]: I0130 16:43:50.700192 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161140 4766 generic.go:334] "Generic (PLEG): container finished" podID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" exitCode=143 Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161534 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" containerID="cri-o://a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" gracePeriod=30 Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.161847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"} Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.163326 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" containerID="cri-o://f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" gracePeriod=30 Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.480470 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.561995 4766 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:43:51 crc kubenswrapper[4766]: I0130 16:43:51.562237 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" containerID="cri-o://05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" gracePeriod=10 Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.170854 4766 generic.go:334] "Generic (PLEG): container finished" podID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerID="05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" exitCode=0 Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171112 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754"} Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" event={"ID":"a7ccb2d3-4270-48e3-99cc-6031edfa30ae","Type":"ContainerDied","Data":"de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26"} Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.171233 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de52ee7d6da539ff2915615ec98d46f519fe75c68b787c9ed63b8db673bf3c26" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.182364 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335103 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335507 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.335526 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") pod \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\" (UID: \"a7ccb2d3-4270-48e3-99cc-6031edfa30ae\") " Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.343774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct" (OuterVolumeSpecName: "kube-api-access-wdnct") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "kube-api-access-wdnct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.404875 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.408605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config" (OuterVolumeSpecName: "config") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.413224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.415514 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438870 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438926 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438938 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdnct\" (UniqueName: \"kubernetes.io/projected/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-kube-api-access-wdnct\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438955 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.438965 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.445670 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7ccb2d3-4270-48e3-99cc-6031edfa30ae" (UID: "a7ccb2d3-4270-48e3-99cc-6031edfa30ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:52 crc kubenswrapper[4766]: I0130 16:43:52.541308 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7ccb2d3-4270-48e3-99cc-6031edfa30ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.182821 4766 generic.go:334] "Generic (PLEG): container finished" podID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" exitCode=0 Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.182950 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jlsp7" Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.192046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"} Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.215958 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.224817 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jlsp7"] Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.278034 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.545996 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:48772->10.217.0.155:9311: read: connection reset by peer" Jan 30 16:43:53 crc kubenswrapper[4766]: I0130 16:43:53.546619 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-869cbffcd-4n87d" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:48770->10.217.0.155:9311: read: connection reset by peer" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.010763 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.052385 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" path="/var/lib/kubelet/pods/a7ccb2d3-4270-48e3-99cc-6031edfa30ae/volumes" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175092 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175217 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175410 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " Jan 30 16:43:54 crc 
kubenswrapper[4766]: I0130 16:43:54.175478 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") pod \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\" (UID: \"6c0217e5-bcc8-482c-9e44-4be03ee7d059\") " Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.175976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs" (OuterVolumeSpecName: "logs") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.182913 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5" (OuterVolumeSpecName: "kube-api-access-4kcg5") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "kube-api-access-4kcg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.190439 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.215405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218765 4766 generic.go:334] "Generic (PLEG): container finished" podID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" exitCode=0 Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"} Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-869cbffcd-4n87d" event={"ID":"6c0217e5-bcc8-482c-9e44-4be03ee7d059","Type":"ContainerDied","Data":"92d1aaa2960ed19f9dead271c07bcadcb09aafba2b36e05ba013dc148c76ebbf"} Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218866 4766 scope.go:117] "RemoveContainer" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.218999 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-869cbffcd-4n87d" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.244361 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data" (OuterVolumeSpecName: "config-data") pod "6c0217e5-bcc8-482c-9e44-4be03ee7d059" (UID: "6c0217e5-bcc8-482c-9e44-4be03ee7d059"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278393 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kcg5\" (UniqueName: \"kubernetes.io/projected/6c0217e5-bcc8-482c-9e44-4be03ee7d059-kube-api-access-4kcg5\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278456 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278467 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0217e5-bcc8-482c-9e44-4be03ee7d059-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.278479 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0217e5-bcc8-482c-9e44-4be03ee7d059-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.347636 4766 scope.go:117] "RemoveContainer" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.367469 4766 scope.go:117] "RemoveContainer" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" Jan 30 16:43:54 crc kubenswrapper[4766]: E0130 16:43:54.368370 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": container with ID starting with 997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45 not found: ID does not exist" containerID="997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.368498 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45"} err="failed to get container status \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": rpc error: code = NotFound desc = could not find container \"997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45\": container with ID starting with 997bcdb331587a6a7b0af6004a1001be1e445ca8d6604c747e6d479bac0d7b45 not found: ID does not exist" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.368589 4766 scope.go:117] "RemoveContainer" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" Jan 30 16:43:54 crc kubenswrapper[4766]: E0130 16:43:54.368990 4766 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": container with ID starting with bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0 not found: ID does not exist" containerID="bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.369027 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0"} err="failed to get container status \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": rpc error: code = NotFound desc = could not find container \"bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0\": container with ID starting with bb7f2daa116cebbbf2cc73455a4ae3436f99d4ad876646c055fc1ee90da5e4c0 not found: ID does not exist" Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.551555 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.559559 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-869cbffcd-4n87d"] Jan 30 16:43:54 crc kubenswrapper[4766]: I0130 16:43:54.999824 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:43:55 crc kubenswrapper[4766]: E0130 16:43:55.394511 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24818215_6fcc_4a45_8f7c_4f65e993eb7d.slice/crio-a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24818215_6fcc_4a45_8f7c_4f65e993eb7d.slice/crio-conmon-a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.725797 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809154 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.809424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810120 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.810719 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24818215-6fcc-4a45-8f7c-4f65e993eb7d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.819418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.819470 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r" (OuterVolumeSpecName: "kube-api-access-jvf5r") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "kube-api-access-jvf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.820729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts" (OuterVolumeSpecName: "scripts") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.871355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data" (OuterVolumeSpecName: "config-data") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") pod \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\" (UID: \"24818215-6fcc-4a45-8f7c-4f65e993eb7d\") " Jan 30 16:43:55 crc kubenswrapper[4766]: W0130 16:43:55.912527 4766 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/24818215-6fcc-4a45-8f7c-4f65e993eb7d/volumes/kubernetes.io~secret/config-data Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912551 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data" (OuterVolumeSpecName: "config-data") pod "24818215-6fcc-4a45-8f7c-4f65e993eb7d" (UID: "24818215-6fcc-4a45-8f7c-4f65e993eb7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912849 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912880 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912892 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912903 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24818215-6fcc-4a45-8f7c-4f65e993eb7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:55 crc kubenswrapper[4766]: I0130 16:43:55.912915 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvf5r\" (UniqueName: \"kubernetes.io/projected/24818215-6fcc-4a45-8f7c-4f65e993eb7d-kube-api-access-jvf5r\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.051214 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" path="/var/lib/kubelet/pods/6c0217e5-bcc8-482c-9e44-4be03ee7d059/volumes" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245384 4766 generic.go:334] "Generic (PLEG): container finished" podID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" exitCode=0 Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245447 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"} Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245473 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"24818215-6fcc-4a45-8f7c-4f65e993eb7d","Type":"ContainerDied","Data":"6b7b6fbe45be35df26ed12004dacb8c6bf29682f09f9e1548db68481d831f9f3"} Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245477 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.245488 4766 scope.go:117] "RemoveContainer" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.278438 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.281657 4766 scope.go:117] "RemoveContainer" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.310465 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321281 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321691 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321705 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321721 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321727 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321736 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321743 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321766 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321771 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321783 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="init" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="init" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.321800 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321806 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321977 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ccb2d3-4270-48e3-99cc-6031edfa30ae" containerName="dnsmasq-dns" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.321992 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322004 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="probe" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322013 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0217e5-bcc8-482c-9e44-4be03ee7d059" containerName="barbican-api-log" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.322026 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" containerName="cinder-scheduler" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.323009 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.338471 4766 scope.go:117] "RemoveContainer" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.341279 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": container with ID starting with f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e not found: ID does not exist" containerID="f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341313 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e"} err="failed to get container status \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": rpc error: code = NotFound desc = could not find container \"f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e\": container with ID starting with f6f621ee7a9bda91392bd4eba1aef2c1a325f5a982ced4f20f15f94074a8ba5e not found: ID does not exist" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341333 4766 scope.go:117] "RemoveContainer" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.341795 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:56 crc kubenswrapper[4766]: E0130 16:43:56.345155 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": container with ID starting with a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a not found: ID does not exist" containerID="a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.345442 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a"} err="failed to get container status \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": rpc error: code = NotFound desc = could not find container \"a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a\": container with ID starting with a4a70788d20ee25591f8d8de9fb5af1054325b5f27e262a04e6b420b99b6a70a not found: ID does not exist" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.353850 4766 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424249 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.424557 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc 
kubenswrapper[4766]: I0130 16:43:56.525918 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525937 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.525971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.527000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.529900 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.530720 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.531933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.532460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.546548 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"cinder-scheduler-0\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " pod="openstack/cinder-scheduler-0" Jan 30 16:43:56 crc kubenswrapper[4766]: I0130 16:43:56.672207 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:43:57 crc kubenswrapper[4766]: W0130 16:43:57.258679 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod063ebe65_0175_443e_8c75_5018c42b3f36.slice/crio-edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143 WatchSource:0}: Error finding container edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143: Status 404 returned error can't find the container with id edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143 Jan 30 16:43:57 crc kubenswrapper[4766]: I0130 16:43:57.261906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.061739 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24818215-6fcc-4a45-8f7c-4f65e993eb7d" path="/var/lib/kubelet/pods/24818215-6fcc-4a45-8f7c-4f65e993eb7d/volumes" Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.271247 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533"} Jan 30 16:43:58 crc kubenswrapper[4766]: I0130 16:43:58.271480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143"} Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.282954 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerStarted","Data":"a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49"} Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.311541 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.3115197419999998 podStartE2EDuration="3.311519742s" podCreationTimestamp="2026-01-30 16:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:43:59.301467532 +0000 UTC m=+1293.939424898" watchObservedRunningTime="2026-01-30 16:43:59.311519742 +0000 UTC m=+1293.949477088" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.377390 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.378516 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.385296 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6wwf9" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.385589 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.386030 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.392161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.485151 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490490 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490610 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.490770 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.551157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.592949 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593088 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.593205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.596322 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.601150 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.619872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.634812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"openstackclient\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " pod="openstack/openstackclient" Jan 30 16:43:59 crc kubenswrapper[4766]: I0130 16:43:59.725758 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 16:44:00 crc kubenswrapper[4766]: I0130 16:44:00.282580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 16:44:00 crc kubenswrapper[4766]: I0130 16:44:00.295882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"372f7d7a-9066-4b9b-884a-5257785ed101","Type":"ContainerStarted","Data":"b7b9378e6f0958ebc3c0de7dd982fb62b932e45e6c09c05227810636618c61d1"} Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.135170 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.136781 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143795 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143845 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.143917 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.155697 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.225993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226193 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226294 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226341 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " 
pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.226378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327754 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327800 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.327925 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " 
pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.328218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.329624 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.334648 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.338985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.341323 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.345835 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"swift-proxy-7d7d659cc9-88mc9\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.462235 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.673728 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.757500 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760635 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent" containerID="cri-o://17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760817 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" containerID="cri-o://05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760869 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core" containerID="cri-o://93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.760905 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent" containerID="cri-o://c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" gracePeriod=30 Jan 30 16:44:01 crc kubenswrapper[4766]: I0130 16:44:01.868418 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": read tcp 10.217.0.2:57172->10.217.0.157:3000: read: connection reset by peer" Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.118622 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:44:02 crc kubenswrapper[4766]: W0130 16:44:02.137236 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3997cdc_9abd_4aa3_9201_0015456d4750.slice/crio-49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae WatchSource:0}: Error finding container 49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae: Status 404 returned error can't find the container with id 49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.316462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319673 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" exitCode=0 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319741 4766 generic.go:334] "Generic (PLEG): container finished" 
podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" exitCode=2 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319751 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" exitCode=0 Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319715 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319813 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"} Jan 30 16:44:02 crc kubenswrapper[4766]: I0130 16:44:02.319830 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.330967 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerStarted","Data":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331910 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.331937 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:03 crc kubenswrapper[4766]: I0130 16:44:03.363352 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podStartSLOduration=2.363330817 podStartE2EDuration="2.363330817s" podCreationTimestamp="2026-01-30 16:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:03.351524161 +0000 UTC m=+1297.989481527" watchObservedRunningTime="2026-01-30 16:44:03.363330817 +0000 UTC m=+1298.001288163" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.229853 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359909 4766 generic.go:334] "Generic (PLEG): container finished" podID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" exitCode=0 Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632","Type":"ContainerDied","Data":"cd2c2b2506c59c114c23d0ceb86a25fba0633c14ce109f4881053f349d4e17dc"} Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.359992 4766 scope.go:117] "RemoveContainer" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.360239 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.365688 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366283 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366307 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366339 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") pod \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\" (UID: \"cbdbf5c1-c9c8-4dbd-9c68-f6067465e632\") " Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.366513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.367110 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.367140 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.372573 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d" (OuterVolumeSpecName: "kube-api-access-f2p4d") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "kube-api-access-f2p4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.384954 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts" (OuterVolumeSpecName: "scripts") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.395787 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.458419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468366 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468412 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2p4d\" (UniqueName: \"kubernetes.io/projected/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-kube-api-access-f2p4d\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468426 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.468434 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.491495 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data" (OuterVolumeSpecName: "config-data") pod "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" (UID: "cbdbf5c1-c9c8-4dbd-9c68-f6067465e632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.516234 4766 scope.go:117] "RemoveContainer" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.540056 4766 scope.go:117] "RemoveContainer" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.570391 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.570754 4766 scope.go:117] "RemoveContainer" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.588759 4766 scope.go:117] "RemoveContainer" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589083 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": container with ID starting with 05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301 not found: ID does not exist" containerID="05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589124 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301"} err="failed to get container status \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": rpc error: code = NotFound desc = could not find container \"05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301\": container with ID starting with 
05048db503ef6992d3bc323c39e15291dcbc536c204e47706f4e99b6d8070301 not found: ID does not exist" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589153 4766 scope.go:117] "RemoveContainer" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589493 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": container with ID starting with 93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895 not found: ID does not exist" containerID="93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589584 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895"} err="failed to get container status \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": rpc error: code = NotFound desc = could not find container \"93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895\": container with ID starting with 93ae5a7f81e887b02a21081139212a780cdb5e34370af4b94974276fd478d895 not found: ID does not exist" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589662 4766 scope.go:117] "RemoveContainer" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.589913 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": container with ID starting with c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4 not found: ID does not exist" containerID="c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.589999 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4"} err="failed to get container status \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": rpc error: code = NotFound desc = could not find container \"c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4\": container with ID starting with c134999c83f4c4e99a82493ef600daf161d562a5ef3c81f4abd0f6a30ecfc2e4 not found: ID does not exist" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.590067 4766 scope.go:117] "RemoveContainer" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.590356 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": container with ID starting with 17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804 not found: ID does not exist" containerID="17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.590451 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804"} err="failed to get container status \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": rpc 
error: code = NotFound desc = could not find container \"17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804\": container with ID starting with 17bfefeda5ec998de43d5f88a70223bab321f691bb9c648b2910fa5820e7f804 not found: ID does not exist" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.697705 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.705571 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729851 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729869 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729890 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729897 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729917 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: E0130 16:44:06.729926 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.729932 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730078 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-central-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730095 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="proxy-httpd" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730107 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="sg-core" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.730123 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" containerName="ceilometer-notification-agent" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.731655 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.734870 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.736251 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.738446 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875740 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875782 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875931 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.875997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978708 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: 
I0130 16:44:06.978762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978830 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978892 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.978935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.980647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.982063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.982921 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:06 crc kubenswrapper[4766]: I0130 16:44:06.984484 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.004431 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.004702 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.005775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"ceilometer-0\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " pod="openstack/ceilometer-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.150785 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 16:44:07 crc kubenswrapper[4766]: I0130 16:44:07.154550 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:08 crc kubenswrapper[4766]: I0130 16:44:08.057259 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdbf5c1-c9c8-4dbd-9c68-f6067465e632" path="/var/lib/kubelet/pods/cbdbf5c1-c9c8-4dbd-9c68-f6067465e632/volumes" Jan 30 16:44:08 crc kubenswrapper[4766]: I0130 16:44:08.597945 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:09 crc kubenswrapper[4766]: I0130 16:44:09.045511 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:44:09 crc kubenswrapper[4766]: I0130 16:44:09.045907 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.467219 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.467833 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:44:11 crc kubenswrapper[4766]: I0130 16:44:11.802742 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:13 crc kubenswrapper[4766]: I0130 16:44:13.567286 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.451667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.454162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstackclient" event={"ID":"372f7d7a-9066-4b9b-884a-5257785ed101","Type":"ContainerStarted","Data":"df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.458668 4766 generic.go:334] "Generic (PLEG): container finished" podID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerID="9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" exitCode=137 Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.458716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b"} Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.589998 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.609001 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.497601463 podStartE2EDuration="15.608981288s" podCreationTimestamp="2026-01-30 16:43:59 +0000 UTC" firstStartedPulling="2026-01-30 16:44:00.284513019 +0000 UTC m=+1294.922470365" lastFinishedPulling="2026-01-30 16:44:13.395892844 +0000 UTC m=+1308.033850190" observedRunningTime="2026-01-30 16:44:14.47165316 +0000 UTC m=+1309.109610506" watchObservedRunningTime="2026-01-30 16:44:14.608981288 +0000 UTC m=+1309.246938634" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722090 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722126 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722348 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722405 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722730 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs" (OuterVolumeSpecName: "logs") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722884 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.722916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") pod \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\" (UID: \"87ea3ac4-577b-4c1d-bf9d-816ad975cce1\") " Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.723332 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.723360 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727301 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts" (OuterVolumeSpecName: "scripts") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727379 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.727815 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt" (OuterVolumeSpecName: "kube-api-access-9x2tt") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "kube-api-access-9x2tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.752737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.782140 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data" (OuterVolumeSpecName: "config-data") pod "87ea3ac4-577b-4c1d-bf9d-816ad975cce1" (UID: "87ea3ac4-577b-4c1d-bf9d-816ad975cce1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824772 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824808 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824820 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824833 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:14 crc kubenswrapper[4766]: I0130 16:44:14.824846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2tt\" (UniqueName: \"kubernetes.io/projected/87ea3ac4-577b-4c1d-bf9d-816ad975cce1-kube-api-access-9x2tt\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.470867 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.470865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87ea3ac4-577b-4c1d-bf9d-816ad975cce1","Type":"ContainerDied","Data":"c8586f92647bbb5a114dcd6f6899c5036c3e271083fa860bf64d7866744bcc76"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.471367 4766 scope.go:117] "RemoveContainer" containerID="9b228d765a873cea41f2139537c23bbfc06db149fe1e44721d80abc73ff98c0b" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.473698 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.473747 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24"} Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.504304 4766 scope.go:117] "RemoveContainer" containerID="672ed2d0c3fa05620751134ad4ec14075e011d163f9d3075b0cc19ed389afb1c" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.526277 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.540283 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556004 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: E0130 16:44:15.556628 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556749 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: E0130 16:44:15.556807 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.556867 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.557194 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.557338 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" containerName="cinder-api-log" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.558536 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564624 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.564575 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.584070 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638300 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638671 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638773 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638807 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.638834 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.740898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.740956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741048 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741107 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741156 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741237 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741910 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.741977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.745832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.750809 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.751363 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.752087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.761794 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.762227 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.767654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"cinder-api-0\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " pod="openstack/cinder-api-0" Jan 30 16:44:15 crc kubenswrapper[4766]: I0130 16:44:15.878787 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.079697 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ea3ac4-577b-4c1d-bf9d-816ad975cce1" path="/var/lib/kubelet/pods/87ea3ac4-577b-4c1d-bf9d-816ad975cce1/volumes" Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.270014 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.487568 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca"} Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.488462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"7e89f84a27af28de0ff96a206ea024d02e0721f6cc45b38d9fef889091b6e08b"} Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.819452 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.824346 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" containerID="cri-o://9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" gracePeriod=30 Jan 30 16:44:16 crc kubenswrapper[4766]: I0130 16:44:16.824577 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" containerID="cri-o://87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.246022 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.311943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.312162 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5995f74f66-6c62l" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" containerID="cri-o://f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.312631 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5995f74f66-6c62l" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" containerID="cri-o://6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.503537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.505075 4766 generic.go:334] "Generic (PLEG): container finished" podID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" exitCode=0 Jan 30 
16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.505127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.507846 4766 generic.go:334] "Generic (PLEG): container finished" podID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" exitCode=143 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.507876 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"} Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912308 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912775 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" containerID="cri-o://c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" gracePeriod=30 Jan 30 16:44:17 crc kubenswrapper[4766]: I0130 16:44:17.912905 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" containerID="cri-o://3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerStarted","Data":"a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519491 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd" containerID="cri-o://a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519529 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent" containerID="cri-o://abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.519568 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core" containerID="cri-o://ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.521016 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent" containerID="cri-o://0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" gracePeriod=30 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.521920 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerStarted","Data":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.522717 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.527120 4766 generic.go:334] "Generic (PLEG): container finished" podID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerID="c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" exitCode=143 Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.527203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a"} Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.551510 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.03078637 podStartE2EDuration="12.551489031s" podCreationTimestamp="2026-01-30 16:44:06 +0000 UTC" firstStartedPulling="2026-01-30 16:44:13.564986506 +0000 UTC m=+1308.202943852" lastFinishedPulling="2026-01-30 16:44:18.085689167 +0000 UTC m=+1312.723646513" observedRunningTime="2026-01-30 16:44:18.541198238 +0000 UTC m=+1313.179155594" watchObservedRunningTime="2026-01-30 16:44:18.551489031 +0000 UTC m=+1313.189446377" Jan 30 16:44:18 crc kubenswrapper[4766]: I0130 16:44:18.562520 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.562499814 podStartE2EDuration="3.562499814s" podCreationTimestamp="2026-01-30 16:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:18.558638288 +0000 UTC m=+1313.196595634" watchObservedRunningTime="2026-01-30 16:44:18.562499814 +0000 UTC m=+1313.200457160" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.378071 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517542 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517679 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517777 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.517856 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") pod \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\" (UID: \"41b169a2-8e44-4929-97b3-dbffe0cde1e3\") " Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.522945 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm" (OuterVolumeSpecName: "kube-api-access-p5bmm") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "kube-api-access-p5bmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.529590 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550747 4766 generic.go:334] "Generic (PLEG): container finished" podID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550849 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5995f74f66-6c62l" event={"ID":"41b169a2-8e44-4929-97b3-dbffe0cde1e3","Type":"ContainerDied","Data":"e2b7b271b357b586463753be91e6e23e2c8d157467dd4ac8a1278aee093a63d3"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.550864 4766 scope.go:117] "RemoveContainer" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.551243 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5995f74f66-6c62l" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563607 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563636 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" exitCode=2 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.563644 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" exitCode=0 Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564502 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.564513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed"} Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.587369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config" (OuterVolumeSpecName: "config") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622540 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5bmm\" (UniqueName: \"kubernetes.io/projected/41b169a2-8e44-4929-97b3-dbffe0cde1e3-kube-api-access-p5bmm\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622894 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.622909 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.627385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.650009 4766 scope.go:117] "RemoveContainer" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.655408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41b169a2-8e44-4929-97b3-dbffe0cde1e3" (UID: "41b169a2-8e44-4929-97b3-dbffe0cde1e3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.682008 4766 scope.go:117] "RemoveContainer" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: E0130 16:44:19.685327 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": container with ID starting with 6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d not found: ID does not exist" containerID="6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.685381 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d"} err="failed to get container status \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": rpc error: code = NotFound desc = could not find container \"6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d\": container with ID starting with 6b2451aedacfbae9e7985dd3d8faea0afad8a36f5d5db78f320e3010f19ea30d not found: ID does not exist" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.685403 4766 scope.go:117] "RemoveContainer" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: E0130 16:44:19.696718 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": container with ID starting with f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64 not found: ID does not exist" containerID="f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.696802 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64"} err="failed to get container status \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": rpc error: code = NotFound desc = could not find container \"f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64\": container with ID starting with f90d990e4bb708269fa857e162e408508f66de376ebca39148001cfa19b15e64 not found: ID does not exist" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.726620 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.726806 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b169a2-8e44-4929-97b3-dbffe0cde1e3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.885111 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:19 crc kubenswrapper[4766]: I0130 16:44:19.897819 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5995f74f66-6c62l"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.073236 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" path="/var/lib/kubelet/pods/41b169a2-8e44-4929-97b3-dbffe0cde1e3/volumes" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.448663 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467744 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467791 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.467889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468223 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468235 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs" (OuterVolumeSpecName: "logs") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468370 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468412 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") pod \"64f88e91-eb62-45a5-bfcb-d38a918e23da\" (UID: \"64f88e91-eb62-45a5-bfcb-d38a918e23da\") " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468686 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.468701 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f88e91-eb62-45a5-bfcb-d38a918e23da-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.482554 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl" (OuterVolumeSpecName: "kube-api-access-q78hl") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "kube-api-access-q78hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.484661 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.501250 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts" (OuterVolumeSpecName: "scripts") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.534292 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.564244 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.569999 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570304 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q78hl\" (UniqueName: \"kubernetes.io/projected/64f88e91-eb62-45a5-bfcb-d38a918e23da-kube-api-access-q78hl\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570401 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570467 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.570523 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592483 4766 generic.go:334] "Generic (PLEG): container finished" podID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" exitCode=0 Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"} Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"64f88e91-eb62-45a5-bfcb-d38a918e23da","Type":"ContainerDied","Data":"935c723156bfbd5c9680c8c0177ab173e556ff98d5fd8edb1776d96225b947f7"} Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592797 4766 scope.go:117] "RemoveContainer" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.592949 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.610358 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.645494 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data" (OuterVolumeSpecName: "config-data") pod "64f88e91-eb62-45a5-bfcb-d38a918e23da" (UID: "64f88e91-eb62-45a5-bfcb-d38a918e23da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.651378 4766 scope.go:117] "RemoveContainer" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.670591 4766 scope.go:117] "RemoveContainer" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.671015 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": container with ID starting with 87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896 not found: ID does not exist" containerID="87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671045 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896"} err="failed to get container status \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": rpc error: code = NotFound desc = could not find container \"87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896\": container with ID starting with 87f92f11ae60bdb748400bbfd15a3c1c211bca6b2a6c5c076aef71b15e044896 not found: ID does not exist" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671066 4766 scope.go:117] "RemoveContainer" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671226 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f88e91-eb62-45a5-bfcb-d38a918e23da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.671243 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.672117 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": container with ID starting with 9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a not found: ID does not exist" containerID="9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.672139 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a"} err="failed to get container 
status \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": rpc error: code = NotFound desc = could not find container \"9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a\": container with ID starting with 9352e7173393de9492d5222999bb67029ca5e0dcf5693f98de32c79fb6c9bf8a not found: ID does not exist" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.932571 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.942493 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973409 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973838 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973859 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973889 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973897 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973911 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973920 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: E0130 16:44:20.973940 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.973947 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974258 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974294 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b169a2-8e44-4929-97b3-dbffe0cde1e3" containerName="neutron-api" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974369 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-httpd" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.974391 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" containerName="glance-log" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.975512 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.977745 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.988054 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:44:20 crc kubenswrapper[4766]: I0130 16:44:20.997835 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.114007 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.119418 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.162951 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177600 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177678 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177755 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177899 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 
16:44:21.177923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.177963 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.235893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.237507 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.245641 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.248562 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.249776 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.263597 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.276315 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.279985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280163 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc 
kubenswrapper[4766]: I0130 16:44:21.280211 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280272 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280311 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280339 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280369 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.280907 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.283652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.283716 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.292023 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.295771 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.297786 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.327037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.360843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389542 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389640 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389668 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc 
kubenswrapper[4766]: I0130 16:44:21.389703 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.389722 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.390569 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.391020 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.422417 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"nova-api-db-create-smswb\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.430274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.431377 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.440835 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.446768 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.447952 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.449945 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.455112 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.491970 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492000 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492058 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.492871 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.511303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"nova-cell0-db-create-pq28c\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.515327 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"nova-api-b00e-account-create-update-r7p4m\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.583156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.602890 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603928 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.603953 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.604896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.624023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.625526 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"nova-cell0-1273-account-create-update-d2bd4\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.631847 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.634200 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.639531 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"nova-cell1-db-create-8mgkl\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.639628 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.645924 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.664515 4766 generic.go:334] "Generic (PLEG): container finished" podID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerID="3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" exitCode=0 Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.664864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938"} Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.679729 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.705058 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.705116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.709189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.765575 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.789608 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.810757 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.811421 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.845368 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"nova-cell1-83af-account-create-update-87kzk\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") " pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.914873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.914953 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915038 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t96m\" (UniqueName: 
\"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915248 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915302 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") pod \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\" (UID: \"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda\") " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.915513 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.916506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs" (OuterVolumeSpecName: "logs") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.926237 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m" (OuterVolumeSpecName: "kube-api-access-5t96m") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "kube-api-access-5t96m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.926559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts" (OuterVolumeSpecName: "scripts") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937219 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937674 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937705 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937715 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937724 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t96m\" (UniqueName: \"kubernetes.io/projected/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-kube-api-access-5t96m\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.937753 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.961743 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.965163 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 16:44:21 crc kubenswrapper[4766]: I0130 16:44:21.979559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.008384 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.008781 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data" (OuterVolumeSpecName: "config-data") pod "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" (UID: "fdb8de08-c6c3-4dac-b9cc-0178d79a7eda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039097 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039137 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039150 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.039162 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.091504 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f88e91-eb62-45a5-bfcb-d38a918e23da" path="/var/lib/kubelet/pods/64f88e91-eb62-45a5-bfcb-d38a918e23da/volumes" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.341130 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.398285 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.529804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: W0130 16:44:22.551919 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d5b8a42_39dd_4b1b_9f92_1e3585b6707b.slice/crio-a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37 WatchSource:0}: Error finding container a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37: Status 404 returned error can't find the container with id a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37 Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.603943 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.626661 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.717332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerStarted","Data":"d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.717385 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerStarted","Data":"45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.721345 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:44:22 crc 
kubenswrapper[4766]: I0130 16:44:22.739583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerStarted","Data":"e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.742856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerStarted","Data":"40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.746833 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-smswb" podStartSLOduration=1.7468131900000001 podStartE2EDuration="1.74681319s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:22.735429786 +0000 UTC m=+1317.373387152" watchObservedRunningTime="2026-01-30 16:44:22.74681319 +0000 UTC m=+1317.384770536" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.748883 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.754575 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerStarted","Data":"3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdb8de08-c6c3-4dac-b9cc-0178d79a7eda","Type":"ContainerDied","Data":"323ddb58f9d31b5bc758e9920b4b5a6270bffb075aa3aec77b37c8af05f7ec01"} Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765205 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.765243 4766 scope.go:117] "RemoveContainer" containerID="3cb23532304b03e1da0f93a0cdcb7fa000cdddef8c5037121da270eaf943e938" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.771886 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b00e-account-create-update-r7p4m" podStartSLOduration=1.771870759 podStartE2EDuration="1.771870759s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:22.758004238 +0000 UTC m=+1317.395961584" watchObservedRunningTime="2026-01-30 16:44:22.771870759 +0000 UTC m=+1317.409828095" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.800907 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.802806 4766 scope.go:117] "RemoveContainer" containerID="c628aa6775fa8d17ac86f5683f6cf5c80fc38a33f4c92757b020af220822f50a" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.807765 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.850786 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: E0130 16:44:22.852645 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.852679 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: E0130 16:44:22.852724 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.852732 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.853318 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-log" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.853378 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" containerName="glance-httpd" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.868558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.875049 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.879023 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.879426 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.974249 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.993959 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.993999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994017 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994065 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:22 crc kubenswrapper[4766]: I0130 16:44:22.994160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.107129 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109506 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109528 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109593 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.109706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 
16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.112202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.107741 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.119714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.122708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.122936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.125098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.135504 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.151164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.166893 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.228889 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.785557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerStarted","Data":"9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.785953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerStarted","Data":"c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.790711 4766 generic.go:334] "Generic (PLEG): container finished" podID="cea24037-4775-49f8-8a3b-d194ea750544" containerID="d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.790773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerDied","Data":"d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.813484 4766 generic.go:334] "Generic (PLEG): container finished" podID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerID="894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.813580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerDied","Data":"894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.822096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerStarted","Data":"ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.822137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerStarted","Data":"334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.823602 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-83af-account-create-update-87kzk" podStartSLOduration=2.823591243 podStartE2EDuration="2.823591243s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:23.80968304 +0000 UTC m=+1318.447640386" watchObservedRunningTime="2026-01-30 16:44:23.823591243 +0000 UTC m=+1318.461548589" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.834305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerStarted","Data":"ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.843686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.848820 4766 generic.go:334] "Generic (PLEG): container finished" podID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerID="c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7" exitCode=0 Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.848935 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerDied","Data":"c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7"} Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.875451 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" podStartSLOduration=2.875425819 podStartE2EDuration="2.875425819s" podCreationTimestamp="2026-01-30 16:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:23.870495584 +0000 UTC m=+1318.508452930" watchObservedRunningTime="2026-01-30 16:44:23.875425819 +0000 UTC m=+1318.513383165" Jan 30 16:44:23 crc kubenswrapper[4766]: I0130 16:44:23.917376 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.057376 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb8de08-c6c3-4dac-b9cc-0178d79a7eda" path="/var/lib/kubelet/pods/fdb8de08-c6c3-4dac-b9cc-0178d79a7eda/volumes" Jan 30 16:44:24 crc kubenswrapper[4766]: W0130 16:44:24.070385 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc2931b_8439_4c5c_be4d_43f4aab528f2.slice/crio-2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b WatchSource:0}: Error finding container 2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b: Status 404 returned error can't find the container with id 2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.886914 4766 generic.go:334] "Generic (PLEG): container finished" podID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerID="ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.887013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerDied","Data":"ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.893138 4766 generic.go:334] "Generic (PLEG): container finished" podID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerID="ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.893304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerDied","Data":"ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.895692 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerStarted","Data":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.897544 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.897578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.909775 4766 generic.go:334] "Generic (PLEG): container finished" podID="98478911-5d75-4bba-a256-e1c2c28e56de" containerID="9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.909866 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerDied","Data":"9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.912777 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerID="0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" exitCode=0 Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913810 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e6f8d1d-5532-47c4-97db-68a1b5b3f876","Type":"ContainerDied","Data":"a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a"} Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.913827 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e5f15ece17462fa98655bf351efadbb053907815e9f63a9046768408f27c8a" Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.974349 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.974333122 podStartE2EDuration="4.974333122s" podCreationTimestamp="2026-01-30 16:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:24.972570643 +0000 UTC m=+1319.610527989" watchObservedRunningTime="2026-01-30 16:44:24.974333122 +0000 UTC m=+1319.612290468" Jan 30 16:44:24 crc kubenswrapper[4766]: I0130 16:44:24.981834 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062711 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062840 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062886 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.062921 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") pod \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\" (UID: \"9e6f8d1d-5532-47c4-97db-68a1b5b3f876\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.070488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.070959 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.076345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw" (OuterVolumeSpecName: "kube-api-access-jvtdw") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "kube-api-access-jvtdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.076430 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts" (OuterVolumeSpecName: "scripts") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.169249 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvtdw\" (UniqueName: \"kubernetes.io/projected/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-kube-api-access-jvtdw\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176346 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176729 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.176824 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.206288 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.238387 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.280546 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.280773 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.338293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data" (OuterVolumeSpecName: "config-data") pod "9e6f8d1d-5532-47c4-97db-68a1b5b3f876" (UID: "9e6f8d1d-5532-47c4-97db-68a1b5b3f876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.386025 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6f8d1d-5532-47c4-97db-68a1b5b3f876-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.445579 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.486928 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") pod \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.487037 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") pod \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\" (UID: \"574fc4f9-56c3-44bf-bb85-26bb97a23ddc\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.495409 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn" (OuterVolumeSpecName: "kube-api-access-dlzjn") pod "574fc4f9-56c3-44bf-bb85-26bb97a23ddc" (UID: "574fc4f9-56c3-44bf-bb85-26bb97a23ddc"). InnerVolumeSpecName "kube-api-access-dlzjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.499316 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "574fc4f9-56c3-44bf-bb85-26bb97a23ddc" (UID: "574fc4f9-56c3-44bf-bb85-26bb97a23ddc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.592353 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.592378 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzjn\" (UniqueName: \"kubernetes.io/projected/574fc4f9-56c3-44bf-bb85-26bb97a23ddc-kube-api-access-dlzjn\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.633216 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.642731 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.693397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") pod \"cea24037-4775-49f8-8a3b-d194ea750544\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694190 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") pod \"d707ae8a-f650-48e3-87e8-dc79076433e4\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694256 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") pod \"d707ae8a-f650-48e3-87e8-dc79076433e4\" (UID: \"d707ae8a-f650-48e3-87e8-dc79076433e4\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.694277 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") pod \"cea24037-4775-49f8-8a3b-d194ea750544\" (UID: \"cea24037-4775-49f8-8a3b-d194ea750544\") " Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.695227 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d707ae8a-f650-48e3-87e8-dc79076433e4" (UID: "d707ae8a-f650-48e3-87e8-dc79076433e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.695633 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d707ae8a-f650-48e3-87e8-dc79076433e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.696150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cea24037-4775-49f8-8a3b-d194ea750544" (UID: "cea24037-4775-49f8-8a3b-d194ea750544"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.700718 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv" (OuterVolumeSpecName: "kube-api-access-7mmpv") pod "cea24037-4775-49f8-8a3b-d194ea750544" (UID: "cea24037-4775-49f8-8a3b-d194ea750544"). InnerVolumeSpecName "kube-api-access-7mmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.700910 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk" (OuterVolumeSpecName: "kube-api-access-lpwqk") pod "d707ae8a-f650-48e3-87e8-dc79076433e4" (UID: "d707ae8a-f650-48e3-87e8-dc79076433e4"). InnerVolumeSpecName "kube-api-access-lpwqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.797792 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpwqk\" (UniqueName: \"kubernetes.io/projected/d707ae8a-f650-48e3-87e8-dc79076433e4-kube-api-access-lpwqk\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.798136 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea24037-4775-49f8-8a3b-d194ea750544-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.798151 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmpv\" (UniqueName: \"kubernetes.io/projected/cea24037-4775-49f8-8a3b-d194ea750544-kube-api-access-7mmpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.928527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pq28c" event={"ID":"d707ae8a-f650-48e3-87e8-dc79076433e4","Type":"ContainerDied","Data":"e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.929787 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d5e3a423c014f40e96b177e972dc5cff17fe4bb117654eaa11b3e1ea2eb5e4" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.929948 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pq28c" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.933737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerStarted","Data":"7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8mgkl" event={"ID":"574fc4f9-56c3-44bf-bb85-26bb97a23ddc","Type":"ContainerDied","Data":"3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940757 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb7c13be781ce5d3b078694b8badbe417819385de26cb3b0df7b2d9025fad6e" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.940811 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8mgkl" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.957452 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smswb" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.963933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smswb" event={"ID":"cea24037-4775-49f8-8a3b-d194ea750544","Type":"ContainerDied","Data":"45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429"} Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.968617 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45e141adfd656f2833367fd8aeb9a9701e7d26dcc680c32948849f3fdcd2f429" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.966634 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:25 crc kubenswrapper[4766]: I0130 16:44:25.971760 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.971725371 podStartE2EDuration="3.971725371s" podCreationTimestamp="2026-01-30 16:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:44:25.96623106 +0000 UTC m=+1320.604188426" watchObservedRunningTime="2026-01-30 16:44:25.971725371 +0000 UTC m=+1320.609682717" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.070695 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.078218 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.090235 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.090991 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091063 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091121 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091200 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091286 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091413 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091478 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091527 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091593 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091645 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091708 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091761 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core" Jan 30 16:44:26 crc kubenswrapper[4766]: E0130 16:44:26.091823 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.091876 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092082 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="sg-core" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092146 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="proxy-httpd" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092250 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092324 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-central-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092396 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092464 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" containerName="ceilometer-notification-agent" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.092520 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea24037-4775-49f8-8a3b-d194ea750544" containerName="mariadb-database-create" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.094215 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.099355 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.100090 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.118688 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206216 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206283 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.206994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.207121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309408 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.309647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.310282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.310755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.315515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.317865 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.325312 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.331090 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.332265 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"ceilometer-0\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") " pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.456199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.616457 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.637992 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-d2bd4"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.641120 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.663726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") pod \"98478911-5d75-4bba-a256-e1c2c28e56de\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") "
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.663827 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") pod \"98478911-5d75-4bba-a256-e1c2c28e56de\" (UID: \"98478911-5d75-4bba-a256-e1c2c28e56de\") "
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.665478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98478911-5d75-4bba-a256-e1c2c28e56de" (UID: "98478911-5d75-4bba-a256-e1c2c28e56de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.679386 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp" (OuterVolumeSpecName: "kube-api-access-75wfp") pod "98478911-5d75-4bba-a256-e1c2c28e56de" (UID: "98478911-5d75-4bba-a256-e1c2c28e56de"). InnerVolumeSpecName "kube-api-access-75wfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") pod \"0c69ac66-232c-41b5-95a8-66eeb597bf70\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") pod \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767680 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") pod \"0c69ac66-232c-41b5-95a8-66eeb597bf70\" (UID: \"0c69ac66-232c-41b5-95a8-66eeb597bf70\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.767725 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") pod \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\" (UID: \"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d\") " Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.768282 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75wfp\" (UniqueName: \"kubernetes.io/projected/98478911-5d75-4bba-a256-e1c2c28e56de-kube-api-access-75wfp\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.768308 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98478911-5d75-4bba-a256-e1c2c28e56de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.770153 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" (UID: "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.770765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c69ac66-232c-41b5-95a8-66eeb597bf70" (UID: "0c69ac66-232c-41b5-95a8-66eeb597bf70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.774478 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt" (OuterVolumeSpecName: "kube-api-access-bjsdt") pod "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" (UID: "0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d"). InnerVolumeSpecName "kube-api-access-bjsdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.783284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc" (OuterVolumeSpecName: "kube-api-access-jlvzc") pod "0c69ac66-232c-41b5-95a8-66eeb597bf70" (UID: "0c69ac66-232c-41b5-95a8-66eeb597bf70"). InnerVolumeSpecName "kube-api-access-jlvzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869577 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlvzc\" (UniqueName: \"kubernetes.io/projected/0c69ac66-232c-41b5-95a8-66eeb597bf70-kube-api-access-jlvzc\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869614 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869624 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c69ac66-232c-41b5-95a8-66eeb597bf70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.869632 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjsdt\" (UniqueName: \"kubernetes.io/projected/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d-kube-api-access-bjsdt\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973780 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83af-account-create-update-87kzk" event={"ID":"98478911-5d75-4bba-a256-e1c2c28e56de","Type":"ContainerDied","Data":"c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e"} Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973842 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cc24a1b2be73c7dd0072b1a89bb90e958b2833e86ee694006d7eee9e3c395e" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.973790 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83af-account-create-update-87kzk" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.976907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-r7p4m" event={"ID":"0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d","Type":"ContainerDied","Data":"40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1"} Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.976980 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40233374d2b83e45828fdfde099831302925232fe79bde3b2bea863dce7854c1" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.977073 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-r7p4m" Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993657 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993876 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-d2bd4" event={"ID":"0c69ac66-232c-41b5-95a8-66eeb597bf70","Type":"ContainerDied","Data":"334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"}
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993928 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334bd2587d275c4f6e18823ddbfefa781776489d5ac69fe7932fc5178e4e33fe"
Jan 30 16:44:26 crc kubenswrapper[4766]: I0130 16:44:26.993955 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:27 crc kubenswrapper[4766]: I0130 16:44:27.012800 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.002530 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"9b528af22b1b5581dbc2a01e256cf97cec5bfd26af827ddc74d5e4d0a050df47"}
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.052042 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6f8d1d-5532-47c4-97db-68a1b5b3f876" path="/var/lib/kubelet/pods/9e6f8d1d-5532-47c4-97db-68a1b5b3f876/volumes"
Jan 30 16:44:28 crc kubenswrapper[4766]: I0130 16:44:28.208891 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 16:44:30 crc kubenswrapper[4766]: I0130 16:44:30.025869 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150"}
Jan 30 16:44:30 crc kubenswrapper[4766]: I0130 16:44:30.026496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59"}
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.603972 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.604276 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.634613 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.653735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.905927 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"]
Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906321 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update"
Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906339 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update"
containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906353 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906360 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: E0130 16:44:31.906380 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906386 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906566 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906594 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.906603 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" containerName="mariadb-account-create-update" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.907143 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.909638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5t29t" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.909839 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.911770 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.926467 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.967992 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968197 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod 
\"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:31 crc kubenswrapper[4766]: I0130 16:44:31.968327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112"} Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.050738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069844 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.069885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.070102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.074903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.077610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " 
pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.090768 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.093801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"nova-cell0-conductor-db-sync-xsc6g\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.226423 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:44:32 crc kubenswrapper[4766]: I0130 16:44:32.697383 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.055113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerStarted","Data":"de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6"} Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.230097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.230156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.270345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:33 crc kubenswrapper[4766]: I0130 16:44:33.281911 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.064245 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.064558 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.066047 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.066077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.192191 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:44:34 crc kubenswrapper[4766]: I0130 16:44:34.196189 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.075405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerStarted","Data":"8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d"} Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.076349 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:44:35 crc kubenswrapper[4766]: I0130 16:44:35.108053 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6450019839999999 podStartE2EDuration="9.108027953s" podCreationTimestamp="2026-01-30 16:44:26 +0000 UTC" firstStartedPulling="2026-01-30 16:44:27.012585806 +0000 UTC m=+1321.650543152" lastFinishedPulling="2026-01-30 16:44:34.475611775 +0000 UTC m=+1329.113569121" observedRunningTime="2026-01-30 16:44:35.103067227 +0000 UTC m=+1329.741024573" watchObservedRunningTime="2026-01-30 16:44:35.108027953 +0000 UTC m=+1329.745985309" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.086272 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.086312 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.228414 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:36 crc kubenswrapper[4766]: I0130 16:44:36.317466 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045304 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045668 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.045706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.046232 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:44:39 crc kubenswrapper[4766]: I0130 16:44:39.046286 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba" gracePeriod=600 Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120709 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" 
containerID="401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba" exitCode=0 Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"} Jan 30 16:44:40 crc kubenswrapper[4766]: I0130 16:44:40.120808 4766 scope.go:117] "RemoveContainer" containerID="ff8a362ea851503bbb575c0aae10eba4412530904ed767a62c62bad94b884ce0" Jan 30 16:44:41 crc kubenswrapper[4766]: I0130 16:44:41.130578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerStarted","Data":"53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab"} Jan 30 16:44:41 crc kubenswrapper[4766]: I0130 16:44:41.132634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.059350 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.059975 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent" containerID="cri-o://b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060084 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core" containerID="cri-o://0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060110 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" containerID="cri-o://76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.060483 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd" containerID="cri-o://8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d" gracePeriod=30 Jan 30 16:44:42 crc kubenswrapper[4766]: I0130 16:44:42.158611 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" podStartSLOduration=3.12062016 podStartE2EDuration="11.158596795s" podCreationTimestamp="2026-01-30 16:44:31 +0000 UTC" firstStartedPulling="2026-01-30 16:44:32.701028464 +0000 UTC m=+1327.338985820" lastFinishedPulling="2026-01-30 16:44:40.739005109 +0000 UTC m=+1335.376962455" observedRunningTime="2026-01-30 16:44:42.150982375 +0000 UTC m=+1336.788939741" watchObservedRunningTime="2026-01-30 16:44:42.158596795 +0000 UTC m=+1336.796554141" Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.150898 4766 generic.go:334] "Generic (PLEG): container 
finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d" exitCode=0 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151408 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112" exitCode=2 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151417 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59" exitCode=0 Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.150988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d"} Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112"} Jan 30 16:44:43 crc kubenswrapper[4766]: I0130 16:44:43.151466 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59"} Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.175373 4766 generic.go:334] "Generic (PLEG): container finished" podID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerID="76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150" exitCode=0 Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.175461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150"} Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.619513 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719835 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.719914 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720145 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") pod \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\" (UID: \"d1d1e402-7f4e-4c9e-9831-0a5d14616fde\") "
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.720737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.721452 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.721479 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.727306 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm" (OuterVolumeSpecName: "kube-api-access-wqnpm") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "kube-api-access-wqnpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.728599 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts" (OuterVolumeSpecName: "scripts") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.761915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.800159 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.822865 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823240 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823335 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqnpm\" (UniqueName: \"kubernetes.io/projected/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-kube-api-access-wqnpm\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.823443 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.824817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data" (OuterVolumeSpecName: "config-data") pod "d1d1e402-7f4e-4c9e-9831-0a5d14616fde" (UID: "d1d1e402-7f4e-4c9e-9831-0a5d14616fde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:44:44 crc kubenswrapper[4766]: I0130 16:44:44.925336 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d1e402-7f4e-4c9e-9831-0a5d14616fde-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.192716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1d1e402-7f4e-4c9e-9831-0a5d14616fde","Type":"ContainerDied","Data":"9b528af22b1b5581dbc2a01e256cf97cec5bfd26af827ddc74d5e4d0a050df47"} Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.192826 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.193222 4766 scope.go:117] "RemoveContainer" containerID="8ccac3aa0a587d70b3197d39e4a424c5d3c4b97bb45f69730f92ad4056adf33d"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.218315 4766 scope.go:117] "RemoveContainer" containerID="0cc930c24dd0e619bf1c708ae90cfe124b8542a4f7be4495b512b8f0f80d9112"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.248385 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.265547 4766 scope.go:117] "RemoveContainer" containerID="76c88c6567a93336687e962e7d2517bf67cd4cf174d2091c90be59d55a672150"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.280683 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.290138 4766 scope.go:117] "RemoveContainer" containerID="b285e8e69d7ab02b0bfae305890b5a29b3d4f19eea785d5a2b4ad8f1c688ad59"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.290381 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291538 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291588 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent"
Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291631 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291641 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent"
Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291656 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291666 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core"
Jan 30 16:44:45 crc kubenswrapper[4766]: E0130 16:44:45.291681 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.291688 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292011 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="sg-core"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292040 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-central-agent"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292056 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="proxy-httpd"
Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.292073 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent"
podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" containerName="ceilometer-notification-agent" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.294216 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.298580 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.299254 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.300305 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.456935 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457251 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457589 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.457916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559483 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559570 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559591 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559628 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559687 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.559724 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.560070 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.560421 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.563559 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.567905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.567934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.569665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.579236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"ceilometer-0\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " pod="openstack/ceilometer-0" Jan 30 16:44:45 crc kubenswrapper[4766]: I0130 16:44:45.638514 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:44:46 crc kubenswrapper[4766]: I0130 16:44:46.050744 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d1e402-7f4e-4c9e-9831-0a5d14616fde" path="/var/lib/kubelet/pods/d1d1e402-7f4e-4c9e-9831-0a5d14616fde/volumes" Jan 30 16:44:46 crc kubenswrapper[4766]: I0130 16:44:46.300694 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:47 crc kubenswrapper[4766]: I0130 16:44:47.210461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a"} Jan 30 16:44:47 crc kubenswrapper[4766]: I0130 16:44:47.210786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"61a267512b3a74db1d89e0f87c3f2b0cc5973c3838b369b646d6b0db83c2aa4a"} Jan 30 16:44:48 crc kubenswrapper[4766]: I0130 16:44:48.225782 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c"} Jan 30 16:44:48 crc kubenswrapper[4766]: I0130 16:44:48.436803 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:44:49 crc kubenswrapper[4766]: I0130 16:44:49.247408 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1"} Jan 30 16:44:55 crc kubenswrapper[4766]: I0130 16:44:55.313658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerStarted","Data":"84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591"} Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320629 4766 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" containerID="cri-o://13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320741 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" containerID="cri-o://0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320762 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" containerID="cri-o://555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.320810 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" containerID="cri-o://84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" gracePeriod=30 Jan 30 16:44:56 crc kubenswrapper[4766]: I0130 16:44:56.359870 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.48163305 podStartE2EDuration="11.35982763s" podCreationTimestamp="2026-01-30 16:44:45 +0000 UTC" firstStartedPulling="2026-01-30 16:44:46.304528495 +0000 UTC m=+1340.942485841" lastFinishedPulling="2026-01-30 16:44:54.182723075 +0000 UTC m=+1348.820680421" observedRunningTime="2026-01-30 16:44:56.349190427 +0000 UTC m=+1350.987147773" watchObservedRunningTime="2026-01-30 16:44:56.35982763 +0000 UTC m=+1350.997784976" Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337016 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" exitCode=0 Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337056 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" exitCode=2 Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337079 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591"} Jan 30 16:44:57 crc kubenswrapper[4766]: I0130 16:44:57.337108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1"} Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.142215 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.143862 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.150216 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.150485 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.158800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.258672 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.360660 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.361698 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod 
\"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.368056 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.378540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"collect-profiles-29496525-bphwz\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.462376 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:00 crc kubenswrapper[4766]: I0130 16:45:00.885685 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.374970 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" exitCode=0 Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.375055 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c"} Jan 30 16:45:01 crc kubenswrapper[4766]: I0130 16:45:01.377588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerStarted","Data":"451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1"} Jan 30 16:45:02 crc kubenswrapper[4766]: I0130 16:45:02.386221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerStarted","Data":"8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2"} Jan 30 16:45:02 crc kubenswrapper[4766]: I0130 16:45:02.401517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" podStartSLOduration=2.401502994 podStartE2EDuration="2.401502994s" podCreationTimestamp="2026-01-30 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:02.399732425 +0000 UTC m=+1357.037689791" watchObservedRunningTime="2026-01-30 16:45:02.401502994 +0000 UTC m=+1357.039460340" Jan 30 16:45:04 crc kubenswrapper[4766]: I0130 16:45:04.411246 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerID="8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2" exitCode=0 Jan 30 16:45:04 crc kubenswrapper[4766]: I0130 
16:45:04.411379 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerDied","Data":"8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2"} Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.830228 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.959647 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") pod \"ae50e63c-8d14-4773-85f7-1deaaee40da6\" (UID: \"ae50e63c-8d14-4773-85f7-1deaaee40da6\") " Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.961464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.966839 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:05 crc kubenswrapper[4766]: I0130 16:45:05.967506 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68" (OuterVolumeSpecName: "kube-api-access-zsh68") pod "ae50e63c-8d14-4773-85f7-1deaaee40da6" (UID: "ae50e63c-8d14-4773-85f7-1deaaee40da6"). InnerVolumeSpecName "kube-api-access-zsh68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062044 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae50e63c-8d14-4773-85f7-1deaaee40da6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062082 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae50e63c-8d14-4773-85f7-1deaaee40da6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.062093 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsh68\" (UniqueName: \"kubernetes.io/projected/ae50e63c-8d14-4773-85f7-1deaaee40da6-kube-api-access-zsh68\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" event={"ID":"ae50e63c-8d14-4773-85f7-1deaaee40da6","Type":"ContainerDied","Data":"451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1"} Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428765 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.428781 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451689aa105db363115bdf472e856a43d1bc5d29077b40817c715c822208a7f1" Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.434055 4766 generic.go:334] "Generic (PLEG): container finished" podID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerID="13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" exitCode=0 Jan 30 16:45:06 crc kubenswrapper[4766]: I0130 16:45:06.434088 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a"} Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.194013 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.204873 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.204980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205169 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205668 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.205817 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.249358 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.286917 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306362 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306403 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306425 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") pod \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\" (UID: \"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08\") " Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.306746 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307030 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307049 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307061 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.307794 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data" (OuterVolumeSpecName: "config-data") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.310348 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts" (OuterVolumeSpecName: "scripts") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.311451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4" (OuterVolumeSpecName: "kube-api-access-t27b4") pod "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" (UID: "90b7f3b2-6f0a-441b-8ff3-a09b5c453a08"). 
InnerVolumeSpecName "kube-api-access-t27b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408501 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408542 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27b4\" (UniqueName: \"kubernetes.io/projected/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-kube-api-access-t27b4\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.408552 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90b7f3b2-6f0a-441b-8ff3-a09b5c453a08","Type":"ContainerDied","Data":"61a267512b3a74db1d89e0f87c3f2b0cc5973c3838b369b646d6b0db83c2aa4a"} Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459552 4766 scope.go:117] "RemoveContainer" containerID="84d8da172448129956d93ef1d07772a89a79900849f061edf1f9286dfa4bb591" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.459262 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.489923 4766 scope.go:117] "RemoveContainer" containerID="0c78464fc87a4f06711694310fdc641ab69421eec8cc23d2052721654b1114c1" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.496116 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.506223 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.522914 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523492 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523503 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523515 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523521 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523539 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523546 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523560 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" 
containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523566 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: E0130 16:45:08.523579 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523585 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523740 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-notification-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523751 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="sg-core" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523762 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="proxy-httpd" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523773 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" containerName="ceilometer-central-agent" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.523782 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" containerName="collect-profiles" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.525243 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.530310 4766 scope.go:117] "RemoveContainer" containerID="555282210fbcf9d58864febf2b6688957bfaeee8bcfeba4a5957116d8831663c" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.530453 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.531754 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.537099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.560524 4766 scope.go:117] "RemoveContainer" containerID="13122a4eabb8082652f9569e2e13ff1ecd84b7d291ad1c35b8176811386f299a" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.616971 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617120 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617156 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.617493 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 
16:45:08.719006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719208 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719608 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.719853 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724683 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.724746 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.725850 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.741822 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"ceilometer-0\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " pod="openstack/ceilometer-0" Jan 30 16:45:08 crc kubenswrapper[4766]: I0130 16:45:08.849997 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:09 crc kubenswrapper[4766]: I0130 16:45:09.305562 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:09 crc kubenswrapper[4766]: I0130 16:45:09.468412 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe"} Jan 30 16:45:10 crc kubenswrapper[4766]: I0130 16:45:10.049393 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b7f3b2-6f0a-441b-8ff3-a09b5c453a08" path="/var/lib/kubelet/pods/90b7f3b2-6f0a-441b-8ff3-a09b5c453a08/volumes" Jan 30 16:45:10 crc kubenswrapper[4766]: I0130 16:45:10.477877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} Jan 30 16:45:11 crc kubenswrapper[4766]: I0130 16:45:11.488091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} Jan 30 16:45:11 crc kubenswrapper[4766]: I0130 16:45:11.488744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"} Jan 30 16:45:15 crc kubenswrapper[4766]: I0130 16:45:15.521878 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerStarted","Data":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} Jan 30 16:45:15 crc kubenswrapper[4766]: I0130 16:45:15.522547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:45:15 crc 
kubenswrapper[4766]: I0130 16:45:15.551710 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.459936242 podStartE2EDuration="7.551691873s" podCreationTimestamp="2026-01-30 16:45:08 +0000 UTC" firstStartedPulling="2026-01-30 16:45:09.311847627 +0000 UTC m=+1363.949804973" lastFinishedPulling="2026-01-30 16:45:14.403603258 +0000 UTC m=+1369.041560604" observedRunningTime="2026-01-30 16:45:15.546612403 +0000 UTC m=+1370.184569749" watchObservedRunningTime="2026-01-30 16:45:15.551691873 +0000 UTC m=+1370.189649219" Jan 30 16:45:21 crc kubenswrapper[4766]: I0130 16:45:21.572739 4766 generic.go:334] "Generic (PLEG): container finished" podID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerID="53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab" exitCode=0 Jan 30 16:45:21 crc kubenswrapper[4766]: I0130 16:45:21.572861 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerDied","Data":"53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab"} Jan 30 16:45:22 crc kubenswrapper[4766]: I0130 16:45:22.930680 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.002784 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003225 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.003396 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") pod \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\" (UID: \"b88e4495-e013-4fc2-b65b-c3d914b89dd8\") " Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.016450 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts" (OuterVolumeSpecName: "scripts") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.016605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6" (OuterVolumeSpecName: "kube-api-access-thmr6") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). 
InnerVolumeSpecName "kube-api-access-thmr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.026813 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data" (OuterVolumeSpecName: "config-data") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.028582 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b88e4495-e013-4fc2-b65b-c3d914b89dd8" (UID: "b88e4495-e013-4fc2-b65b-c3d914b89dd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106466 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106527 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106539 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmr6\" (UniqueName: \"kubernetes.io/projected/b88e4495-e013-4fc2-b65b-c3d914b89dd8-kube-api-access-thmr6\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.106550 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b88e4495-e013-4fc2-b65b-c3d914b89dd8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" event={"ID":"b88e4495-e013-4fc2-b65b-c3d914b89dd8","Type":"ContainerDied","Data":"de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6"} Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591673 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de33c59a496f86bc7326b1527b7dc3b9a3d5c593c7c83837b47d719057a9c4e6" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.591431 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsc6g" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701043 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:45:23 crc kubenswrapper[4766]: E0130 16:45:23.701571 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701597 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.701827 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" containerName="nova-cell0-conductor-db-sync" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.702660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.707563 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.708064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5t29t" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.712735 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716073 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.716313 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.818408 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.818500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: 
I0130 16:45:23.818630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.825063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.829169 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:23 crc kubenswrapper[4766]: I0130 16:45:23.839431 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"nova-cell0-conductor-0\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.024720 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.479271 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:45:24 crc kubenswrapper[4766]: W0130 16:45:24.483051 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5346df4_67e7_4a20_bb56_11173908a334.slice/crio-33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09 WatchSource:0}: Error finding container 33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09: Status 404 returned error can't find the container with id 33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09 Jan 30 16:45:24 crc kubenswrapper[4766]: I0130 16:45:24.601402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerStarted","Data":"33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09"} Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.611128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerStarted","Data":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"} Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.611454 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:25 crc kubenswrapper[4766]: I0130 16:45:25.635070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.635049661 podStartE2EDuration="2.635049661s" podCreationTimestamp="2026-01-30 16:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
16:45:25.628445149 +0000 UTC m=+1380.266402495" watchObservedRunningTime="2026-01-30 16:45:25.635049661 +0000 UTC m=+1380.273007007" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.050856 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.484253 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.485657 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.493460 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.496072 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.497971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.532998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533080 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.533138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" 
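The entries above and immediately below show the kubelet's volume reconciler working through the secret and projected volumes of nova-cell0-conductor-0 and nova-cell0-cell-mapping-2sfxl: each volume is first verified as attached ("operationExecutor.VerifyControllerAttachedVolume started", reconciler_common.go:245), then mounted ("operationExecutor.MountVolume started", reconciler_common.go:218), and finally confirmed ("MountVolume.SetUp succeeded", operation_generator.go:637); only once all volumes are ready does sandbox creation proceed ("No sandbox for pod can be found. Need to start a new one") and PLEG report ContainerStarted. The sketch below is a hypothetical helper, not part of the kubelet, for estimating per-volume mount latency from a journal like this one; it assumes klog-formatted lines on stdin and matches the exact message strings seen above. Against this excerpt it would report roughly 6ms for combined-ca-bundle on nova-cell0-cell-mapping-2sfxl (started 16:45:29.635955, succeeded 16:45:29.641919).

// mountlat.go: hypothetical helper that scans kubelet journal lines on stdin
// and reports, per pod/volume, the time between "MountVolume started" and
// "MountVolume.SetUp succeeded". Illustrative only; the message formats are
// copied from the log above and are not a stable kubelet interface.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// klog info-line timestamp, e.g. "I0130 16:45:29.635849"
	tsRe = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)
	// volume names appear with escaped quotes in the journal text, e.g. \"combined-ca-bundle\"
	startedRe   = regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^\\"]+)\\".*pod="([^"]+)"`)
	succeededRe = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".*pod="([^"]+)"`)
)

func main() {
	start := map[string]time.Time{} // "pod/volume" -> time MountVolume started
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		ts := tsRe.FindStringSubmatch(line)
		if ts == nil {
			continue
		}
		// Clock time only; a sketch, so date rollover at midnight is ignored.
		when, err := time.Parse("15:04:05.000000", ts[1])
		if err != nil {
			continue
		}
		if m := startedRe.FindStringSubmatch(line); m != nil {
			start[m[2]+"/"+m[1]] = when
		} else if m := succeededRe.FindStringSubmatch(line); m != nil {
			key := m[2] + "/" + m[1]
			if s, ok := start[key]; ok {
				fmt.Printf("%-70s %v\n", key, when.Sub(s))
				delete(start, key)
			}
		}
	}
}

Fed with something like "journalctl -u kubelet --no-pager | go run mountlat.go" on the node, this would print one latency per mounted volume; secret and projected volumes here consistently complete within tens of milliseconds, which is why sandbox creation follows almost immediately in the entries below.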
Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635955 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.635984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.641919 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.644881 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.656236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.664215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"nova-cell0-cell-mapping-2sfxl\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.693247 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.694962 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.698081 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.734959 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739022 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739081 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739144 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.739238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.779019 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.780629 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.785316 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.796795 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.815312 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840575 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840645 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840686 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840720 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.840749 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.841540 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.846146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.847810 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod 
\"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.865017 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"nova-api-0\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " pod="openstack/nova-api-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.895443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.896738 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.899417 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.914163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945714 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945797 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.945869 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.958952 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:29 crc kubenswrapper[4766]: I0130 16:45:29.959422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.004119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"nova-scheduler-0\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.014928 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.017321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.020878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048205 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048272 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048494 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048527 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048586 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.048624 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.050762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.095413 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.098799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.102213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.109797 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.134879 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151213 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.151373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.152272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.179687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.215621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.217689 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.222338 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.230225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"nova-metadata-0\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") " pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.248722 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.257235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.257543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.258018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259496 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.259661 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364142 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364315 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364353 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.364396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.365475 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.374606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.376905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc 
kubenswrapper[4766]: I0130 16:45:30.379376 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.389134 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.419304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"dnsmasq-dns-845d6d6f59-lx7hm\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.434699 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.590272 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:45:30 crc kubenswrapper[4766]: W0130 16:45:30.595908 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7639b60e_a348_4203_84b6_68af413cd517.slice/crio-a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855 WatchSource:0}: Error finding container a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855: Status 404 returned error can't find the container with id a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855 Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.627881 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.674322 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerStarted","Data":"a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855"} Jan 30 16:45:30 crc kubenswrapper[4766]: I0130 16:45:30.943427 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.042326 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.043491 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.046750 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.050211 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.073333 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.079607 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.100016 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.155506 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.182984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.183027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.192039 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.192125 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.193414 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.201071 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"nova-cell1-conductor-db-sync-d5p85\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.257162 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:31 crc kubenswrapper[4766]: W0130 16:45:31.267102 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode111c80e_0c45_49f2_bfc0_665fbdd2ac56.slice/crio-7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d WatchSource:0}: Error finding container 7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d: Status 404 returned error can't find the container with id 7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d Jan 30 16:45:31 crc kubenswrapper[4766]: W0130 16:45:31.323807 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92d5f78_a271_41e7_bde9_410e3db6ee58.slice/crio-91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc WatchSource:0}: Error finding container 91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc: Status 404 returned error can't find the container with id 91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.326386 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.382530 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.684761 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerStarted","Data":"66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.686101 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.686147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.687744 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerStarted","Data":"e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.698404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerStarted","Data":"d03cdc6170eebcf6ba04199860083b79a704186bcc24a8f0c94fb427aa1473a0"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.704270 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2sfxl" podStartSLOduration=2.7042499429999998 podStartE2EDuration="2.704249943s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:31.701674472 +0000 UTC m=+1386.339631818" watchObservedRunningTime="2026-01-30 16:45:31.704249943 +0000 UTC m=+1386.342207299" Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.708664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.710251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d"} Jan 30 16:45:31 crc kubenswrapper[4766]: I0130 16:45:31.841441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.741775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerStarted","Data":"244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.742089 4766 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerStarted","Data":"d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.744707 4766 generic.go:334] "Generic (PLEG): container finished" podID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerID="1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532" exitCode=0 Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.744794 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532"} Jan 30 16:45:32 crc kubenswrapper[4766]: I0130 16:45:32.761925 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-d5p85" podStartSLOduration=1.7618991400000001 podStartE2EDuration="1.76189914s" podCreationTimestamp="2026-01-30 16:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:32.754569528 +0000 UTC m=+1387.392526894" watchObservedRunningTime="2026-01-30 16:45:32.76189914 +0000 UTC m=+1387.399856486" Jan 30 16:45:34 crc kubenswrapper[4766]: I0130 16:45:34.167890 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:45:34 crc kubenswrapper[4766]: I0130 16:45:34.235903 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.783997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.784344 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerStarted","Data":"c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.784130 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata" containerID="cri-o://6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" gracePeriod=30 Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.785763 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log" containerID="cri-o://c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" gracePeriod=30 Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.789106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerStarted","Data":"89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.789252 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 
16:45:35.796241 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerStarted","Data":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.800430 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5" gracePeriod=30 Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.801158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerStarted","Data":"135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.806986 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.975874226 podStartE2EDuration="6.806968004s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.274374127 +0000 UTC m=+1385.912331463" lastFinishedPulling="2026-01-30 16:45:35.105467895 +0000 UTC m=+1389.743425241" observedRunningTime="2026-01-30 16:45:35.805671208 +0000 UTC m=+1390.443628564" watchObservedRunningTime="2026-01-30 16:45:35.806968004 +0000 UTC m=+1390.444925360" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.808832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.808984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerStarted","Data":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"} Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.834056 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6902316170000002 podStartE2EDuration="6.834037269s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:30.954092274 +0000 UTC m=+1385.592049620" lastFinishedPulling="2026-01-30 16:45:35.097897926 +0000 UTC m=+1389.735855272" observedRunningTime="2026-01-30 16:45:35.823085027 +0000 UTC m=+1390.461042383" watchObservedRunningTime="2026-01-30 16:45:35.834037269 +0000 UTC m=+1390.471994615" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.848917 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" podStartSLOduration=5.848895438 podStartE2EDuration="5.848895438s" podCreationTimestamp="2026-01-30 16:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:35.840442895 +0000 UTC m=+1390.478400251" watchObservedRunningTime="2026-01-30 16:45:35.848895438 +0000 UTC m=+1390.486852784" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.868400 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.907164185 
podStartE2EDuration="6.868378243s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.13708637 +0000 UTC m=+1385.775043716" lastFinishedPulling="2026-01-30 16:45:35.098300428 +0000 UTC m=+1389.736257774" observedRunningTime="2026-01-30 16:45:35.865277628 +0000 UTC m=+1390.503235004" watchObservedRunningTime="2026-01-30 16:45:35.868378243 +0000 UTC m=+1390.506335589" Jan 30 16:45:35 crc kubenswrapper[4766]: I0130 16:45:35.888911 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.952517554 podStartE2EDuration="6.888887588s" podCreationTimestamp="2026-01-30 16:45:29 +0000 UTC" firstStartedPulling="2026-01-30 16:45:31.16036313 +0000 UTC m=+1385.798320476" lastFinishedPulling="2026-01-30 16:45:35.096733164 +0000 UTC m=+1389.734690510" observedRunningTime="2026-01-30 16:45:35.884700942 +0000 UTC m=+1390.522658298" watchObservedRunningTime="2026-01-30 16:45:35.888887588 +0000 UTC m=+1390.526844954" Jan 30 16:45:36 crc kubenswrapper[4766]: I0130 16:45:36.819410 4766 generic.go:334] "Generic (PLEG): container finished" podID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerID="c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" exitCode=143 Jan 30 16:45:36 crc kubenswrapper[4766]: I0130 16:45:36.819551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b"} Jan 30 16:45:38 crc kubenswrapper[4766]: I0130 16:45:38.857738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 16:45:39 crc kubenswrapper[4766]: I0130 16:45:39.847715 4766 generic.go:334] "Generic (PLEG): container finished" podID="7639b60e-a348-4203-84b6-68af413cd517" containerID="66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27" exitCode=0 Jan 30 16:45:39 crc kubenswrapper[4766]: I0130 16:45:39.847762 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerDied","Data":"66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27"} Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.052027 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.052092 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.116202 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.116258 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.142977 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.366306 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.435817 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.435860 4766 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.630380 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.731440 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.735466 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-689xd" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" containerID="cri-o://c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" gracePeriod=10 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.889873 4766 generic.go:334] "Generic (PLEG): container finished" podID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerID="c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" exitCode=0 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.889934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2"} Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.891825 4766 generic.go:334] "Generic (PLEG): container finished" podID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerID="244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0" exitCode=0 Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.892737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerDied","Data":"244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0"} Jan 30 16:45:40 crc kubenswrapper[4766]: I0130 16:45:40.932309 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.138446 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.138406 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.350452 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.355312 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447455 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447608 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447650 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447752 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447776 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") pod \"7639b60e-a348-4203-84b6-68af413cd517\" (UID: \"7639b60e-a348-4203-84b6-68af413cd517\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.447855 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") pod \"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\" (UID: 
\"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae\") " Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.469398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs" (OuterVolumeSpecName: "kube-api-access-2q7gs") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "kube-api-access-2q7gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.469497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7" (OuterVolumeSpecName: "kube-api-access-f5hb7") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "kube-api-access-f5hb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.480503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts" (OuterVolumeSpecName: "scripts") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.540336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data" (OuterVolumeSpecName: "config-data") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551467 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q7gs\" (UniqueName: \"kubernetes.io/projected/7639b60e-a348-4203-84b6-68af413cd517-kube-api-access-2q7gs\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551507 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5hb7\" (UniqueName: \"kubernetes.io/projected/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-kube-api-access-f5hb7\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551517 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.551544 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.569287 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7639b60e-a348-4203-84b6-68af413cd517" (UID: "7639b60e-a348-4203-84b6-68af413cd517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.601822 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config" (OuterVolumeSpecName: "config") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.609591 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.619462 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.622812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.630087 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" (UID: "0d9443ad-23f2-4953-8fe3-1e30cddbb3ae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653617 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653662 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653673 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653682 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653690 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.653699 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7639b60e-a348-4203-84b6-68af413cd517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2sfxl" event={"ID":"7639b60e-a348-4203-84b6-68af413cd517","Type":"ContainerDied","Data":"a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855"} Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903580 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b32e14c48ab98ec0dc4ccb66deca2794d95c0f1a764e15ce4a040ed275b855" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.903252 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2sfxl" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-689xd" event={"ID":"0d9443ad-23f2-4953-8fe3-1e30cddbb3ae","Type":"ContainerDied","Data":"4beec3b7b2815bc010286da11d4373b366b5518d41bb70db8fd44faa4b14d146"} Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907834 4766 scope.go:117] "RemoveContainer" containerID="c65acb718d30ac6457c863184074fe84d257f4ac320cf7f985745ed5d35f59e2" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.907874 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-689xd" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.961768 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.967734 4766 scope.go:117] "RemoveContainer" containerID="4d2657555f1f9716d5dd3ad8f0603e91ccb9d9b3d7434f90175a66e09ade98bf" Jan 30 16:45:41 crc kubenswrapper[4766]: I0130 16:45:41.971083 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-689xd"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.062560 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" path="/var/lib/kubelet/pods/0d9443ad-23f2-4953-8fe3-1e30cddbb3ae/volumes" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.098524 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129215 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129672 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" containerID="cri-o://bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.129725 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" containerID="cri-o://0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.342014 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.489135 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.490195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.491215 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.491300 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") pod \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\" (UID: \"aeb40512-6ec4-4dd4-a623-ed2232387ee3\") " Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.496927 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts" (OuterVolumeSpecName: "scripts") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.497088 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx" (OuterVolumeSpecName: "kube-api-access-xwmmx") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "kube-api-access-xwmmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.531024 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.537437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data" (OuterVolumeSpecName: "config-data") pod "aeb40512-6ec4-4dd4-a623-ed2232387ee3" (UID: "aeb40512-6ec4-4dd4-a623-ed2232387ee3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593329 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593419 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593439 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmmx\" (UniqueName: \"kubernetes.io/projected/aeb40512-6ec4-4dd4-a623-ed2232387ee3-kube-api-access-xwmmx\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.593451 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb40512-6ec4-4dd4-a623-ed2232387ee3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.869975 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.870203 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" containerID="cri-o://e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" gracePeriod=30 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916582 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d5p85" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916576 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d5p85" event={"ID":"aeb40512-6ec4-4dd4-a623-ed2232387ee3","Type":"ContainerDied","Data":"d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5"} Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.916917 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e8fa91258ad408fb0e5fe2f36ffb083a7f80ad736cddc099769fad39b945a5" Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919089 4766 generic.go:334] "Generic (PLEG): container finished" podID="79d5404e-802d-42c7-9245-579f6724b524" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" exitCode=143 Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"} Jan 30 16:45:42 crc kubenswrapper[4766]: I0130 16:45:42.919315 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" containerID="cri-o://1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" gracePeriod=30 Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.026772 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027466 4766 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027561 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027639 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="init" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027742 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="init" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027828 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.027898 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.027995 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028072 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028404 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" containerName="nova-cell1-conductor-db-sync" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028514 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7639b60e-a348-4203-84b6-68af413cd517" containerName="nova-manage" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.028676 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9443ad-23f2-4953-8fe3-1e30cddbb3ae" containerName="dnsmasq-dns" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.034996 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.040994 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.053728 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102508 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102559 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.102621 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.204989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.205054 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.205089 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.210605 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.218123 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.260204 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"nova-cell1-conductor-0\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.358675 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.535922 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.614216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") pod \"17273647-f97c-490b-a766-fd4f004d3732\" (UID: \"17273647-f97c-490b-a766-fd4f004d3732\") " Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.622637 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m" (OuterVolumeSpecName: "kube-api-access-hpp5m") pod "17273647-f97c-490b-a766-fd4f004d3732" (UID: "17273647-f97c-490b-a766-fd4f004d3732"). InnerVolumeSpecName "kube-api-access-hpp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.719735 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpp5m\" (UniqueName: \"kubernetes.io/projected/17273647-f97c-490b-a766-fd4f004d3732-kube-api-access-hpp5m\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931026 4766 generic.go:334] "Generic (PLEG): container finished" podID="17273647-f97c-490b-a766-fd4f004d3732" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" exitCode=2 Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931065 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerDied","Data":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17273647-f97c-490b-a766-fd4f004d3732","Type":"ContainerDied","Data":"6ab83b607cb34660892c3f858dbee7a7095d74efd1f6621864cf951d1afb4fc6"} Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931108 4766 scope.go:117] "RemoveContainer" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.931242 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.974788 4766 scope.go:117] "RemoveContainer" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: E0130 16:45:43.976373 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": container with ID starting with e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a not found: ID does not exist" containerID="e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.976421 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a"} err="failed to get container status \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": rpc error: code = NotFound desc = could not find container \"e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a\": container with ID starting with e7df447fae2323214b04214ed5721ca2aff6ea2f59b7f5db7687b2d27c39a32a not found: ID does not exist" Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.981070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:43 crc kubenswrapper[4766]: I0130 16:45:43.998685 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: W0130 16:45:44.005028 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa69536_b701_43a4_814a_2ba16974b1dd.slice/crio-dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7 WatchSource:0}: Error finding container dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7: Status 404 returned error can't find the container with id dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7 Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.008946 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: E0130 16:45:44.009540 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.009564 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.009764 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17273647-f97c-490b-a766-fd4f004d3732" containerName="kube-state-metrics" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.010781 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.013155 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.019481 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.030368 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.053105 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17273647-f97c-490b-a766-fd4f004d3732" path="/var/lib/kubelet/pods/17273647-f97c-490b-a766-fd4f004d3732/volumes" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.053756 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127739 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.127974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.128068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.229929 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.230209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.235572 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.236859 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.244854 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.268141 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"kube-state-metrics-0\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.463021 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.913624 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:45:44 crc kubenswrapper[4766]: W0130 16:45:44.917987 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb576787_90a5_4e81_a047_6fcf37921335.slice/crio-004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0 WatchSource:0}: Error finding container 004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0: Status 404 returned error can't find the container with id 004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0 Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.964127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerStarted","Data":"7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.966533 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.966562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerStarted","Data":"dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.967707 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerStarted","Data":"004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0"} Jan 30 16:45:44 crc kubenswrapper[4766]: I0130 16:45:44.988384 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.988365167 podStartE2EDuration="2.988365167s" podCreationTimestamp="2026-01-30 16:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:44.98154534 +0000 UTC m=+1399.619502696" watchObservedRunningTime="2026-01-30 16:45:44.988365167 +0000 UTC m=+1399.626322513" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.081743 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082202 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" containerID="cri-o://3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082837 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" containerID="cri-o://095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082907 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" 
containerID="cri-o://c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.082958 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" containerID="cri-o://d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" gracePeriod=30 Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.121394 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.124193 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.125720 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:45:45 crc kubenswrapper[4766]: E0130 16:45:45.125754 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.981210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerStarted","Data":"b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.981522 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984348 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" exitCode=0 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984843 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" exitCode=2 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984922 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" exitCode=0 Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.984415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.985036 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} Jan 30 16:45:45 crc kubenswrapper[4766]: I0130 16:45:45.985058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} Jan 30 16:45:46 crc kubenswrapper[4766]: I0130 16:45:46.026463 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.661777633 podStartE2EDuration="3.026443156s" podCreationTimestamp="2026-01-30 16:45:43 +0000 UTC" firstStartedPulling="2026-01-30 16:45:44.919967255 +0000 UTC m=+1399.557924601" lastFinishedPulling="2026-01-30 16:45:45.284632788 +0000 UTC m=+1399.922590124" observedRunningTime="2026-01-30 16:45:46.018770575 +0000 UTC m=+1400.656727921" watchObservedRunningTime="2026-01-30 16:45:46.026443156 +0000 UTC m=+1400.664400502" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.532715 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.535305 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.619992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620022 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620047 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620122 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") pod \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\" (UID: \"c5a66dd3-f929-4a64-a1c3-82731fbe06e6\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") pod \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\" (UID: \"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63\") " Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620657 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.620907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.625046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts" (OuterVolumeSpecName: "scripts") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.643211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l" (OuterVolumeSpecName: "kube-api-access-wh88l") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "kube-api-access-wh88l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.643286 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2" (OuterVolumeSpecName: "kube-api-access-pvws2") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "kube-api-access-pvws2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.658553 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.679388 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.694404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data" (OuterVolumeSpecName: "config-data") pod "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" (UID: "aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723626 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723655 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723667 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvws2\" (UniqueName: \"kubernetes.io/projected/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-kube-api-access-pvws2\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723676 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723684 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723692 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh88l\" (UniqueName: \"kubernetes.io/projected/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-kube-api-access-wh88l\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723700 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.723708 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.749126 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.768009 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data" (OuterVolumeSpecName: "config-data") pod "c5a66dd3-f929-4a64-a1c3-82731fbe06e6" (UID: "c5a66dd3-f929-4a64-a1c3-82731fbe06e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.826922 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.826954 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a66dd3-f929-4a64-a1c3-82731fbe06e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:47 crc kubenswrapper[4766]: I0130 16:45:47.925500 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006695 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" exitCode=0 Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5a66dd3-f929-4a64-a1c3-82731fbe06e6","Type":"ContainerDied","Data":"254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.006852 4766 scope.go:117] "RemoveContainer" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.007017 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018597 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" exitCode=0 Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018660 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerDied","Data":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.018786 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63","Type":"ContainerDied","Data":"e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024445 4766 generic.go:334] "Generic (PLEG): container finished" podID="79d5404e-802d-42c7-9245-579f6724b524" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" exitCode=0 Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024474 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79d5404e-802d-42c7-9245-579f6724b524","Type":"ContainerDied","Data":"02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b"} Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.024531 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029125 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029321 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029389 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") pod \"79d5404e-802d-42c7-9245-579f6724b524\" (UID: \"79d5404e-802d-42c7-9245-579f6724b524\") " Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.029963 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs" (OuterVolumeSpecName: "logs") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.038140 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m" (OuterVolumeSpecName: "kube-api-access-lbq7m") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "kube-api-access-lbq7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.064479 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data" (OuterVolumeSpecName: "config-data") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.067429 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d5404e-802d-42c7-9245-579f6724b524" (UID: "79d5404e-802d-42c7-9245-579f6724b524"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131210 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131250 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79d5404e-802d-42c7-9245-579f6724b524-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131259 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d5404e-802d-42c7-9245-579f6724b524-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.131270 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbq7m\" (UniqueName: \"kubernetes.io/projected/79d5404e-802d-42c7-9245-579f6724b524-kube-api-access-lbq7m\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.153949 4766 scope.go:117] "RemoveContainer" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.183239 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.194878 4766 scope.go:117] "RemoveContainer" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.208915 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.220422 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.240595 4766 scope.go:117] "RemoveContainer" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244130 4766 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244600 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244613 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244630 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244639 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244669 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244675 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244686 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244692 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244700 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244706 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244723 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244731 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.244746 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244752 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244920 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-central-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244934 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-log" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244946 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="sg-core" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244954 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" containerName="nova-scheduler-scheduler" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244969 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="ceilometer-notification-agent" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244979 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" containerName="proxy-httpd" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.244994 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d5404e-802d-42c7-9245-579f6724b524" containerName="nova-api-api" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.258425 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.258533 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.261383 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.267717 4766 scope.go:117] "RemoveContainer" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.268361 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": container with ID starting with 095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb not found: ID does not exist" containerID="095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.268420 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb"} err="failed to get container status \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": rpc error: code = NotFound desc = could not find container \"095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb\": container with ID starting with 095fbd04500fabe32c5529637fb29524bb631970072bad270f7a2ca05c8984eb not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.268456 4766 scope.go:117] "RemoveContainer" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.269212 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": container with ID starting with c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14 not found: ID does not exist" containerID="c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269259 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14"} err="failed to get container status \"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": rpc error: code = NotFound desc = could not find container 
\"c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14\": container with ID starting with c2354b49358944598e501410cfcfb12f2f2f0e5dbbd8985d03806c1dbd2dee14 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269404 4766 scope.go:117] "RemoveContainer" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.269951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": container with ID starting with d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2 not found: ID does not exist" containerID="d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.269969 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2"} err="failed to get container status \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": rpc error: code = NotFound desc = could not find container \"d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2\": container with ID starting with d7000a4a86f88652cbcf7efd3a70364e4333986ca8785e8a1e1b51af011f84a2 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.270081 4766 scope.go:117] "RemoveContainer" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.271500 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": container with ID starting with 3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0 not found: ID does not exist" containerID="3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.271547 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0"} err="failed to get container status \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": rpc error: code = NotFound desc = could not find container \"3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0\": container with ID starting with 3ee57530bf58f120ed8444d7871c4320f84fdbdb876a1a3538ae1dbd71c9fce0 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.271577 4766 scope.go:117] "RemoveContainer" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.272247 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.285288 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.288090 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290340 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290653 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.290850 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.293276 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.298031 4766 scope.go:117] "RemoveContainer" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.301806 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": container with ID starting with 1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5 not found: ID does not exist" containerID="1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.301843 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5"} err="failed to get container status \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": rpc error: code = NotFound desc = could not find container \"1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5\": container with ID starting with 1b1a2904ce23cae48baba8e63ee622ca248edd69cb5a2d3b876a5a4e606607d5 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.301868 4766 scope.go:117] "RemoveContainer" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.332058 4766 scope.go:117] "RemoveContainer" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334376 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334477 4766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334499 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334561 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334895 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.334927 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.335110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.363291 4766 scope.go:117] "RemoveContainer" containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.365101 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": container with ID starting with 0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf not found: ID does not exist" 
containerID="0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.365142 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf"} err="failed to get container status \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": rpc error: code = NotFound desc = could not find container \"0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf\": container with ID starting with 0eb138c4b0d9c3e478ee70cc70d6bcf339b2097c5f2e77f54e8392a51dc7cccf not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.365171 4766 scope.go:117] "RemoveContainer" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.366545 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": container with ID starting with bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72 not found: ID does not exist" containerID="bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.366579 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72"} err="failed to get container status \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": rpc error: code = NotFound desc = could not find container \"bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72\": container with ID starting with bf9c2c64ce4dc263c25693c43e19bbb4bc11a93e8ffdf74b6c65121f4ef29b72 not found: ID does not exist" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.377096 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.391582 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.404799 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.406305 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.412470 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:45:48 crc kubenswrapper[4766]: E0130 16:45:48.428344 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a66dd3_f929_4a64_a1c3_82731fbe06e6.slice/crio-254afe617ee7d083f8aef7d6025266a07966124e61977849a39348c5dd429afe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0b5877_d5fe_4d24_aaaa_d88eedb8ef63.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d5404e_802d_42c7_9245_579f6724b524.slice/crio-02b9f2097968ae69cd7109fa143ebd5cddb3e07d1afbc01d074eaa6ede05fb7b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0b5877_d5fe_4d24_aaaa_d88eedb8ef63.slice/crio-e0faf2b25288d8c56af242de92e6a4e63d3647846b88fc5ff898477a334052e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a66dd3_f929_4a64_a1c3_82731fbe06e6.slice\": RecentStats: unable to find data in memory cache]" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.431611 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436579 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436703 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436799 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436828 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436854 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.436984 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437006 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437047 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437069 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437101 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437149 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.437843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.452091 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.455924 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456411 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456727 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.456840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.463361 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"nova-scheduler-0\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.465006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"ceilometer-0\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") " pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539242 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.539472 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.540110 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.542843 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.543320 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.556733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"nova-api-0\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.583670 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.606797 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.725578 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:45:48 crc kubenswrapper[4766]: I0130 16:45:48.867386 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.046139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerStarted","Data":"416558aff7b28c3ff1ea22294f12594e969f7f4faf03939457f56d9bd99a3f11"} Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.215659 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: I0130 16:45:49.332161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:45:49 crc kubenswrapper[4766]: W0130 16:45:49.340979 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f697e9a_6e36_40c9_a199_29dc8ec19900.slice/crio-15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a WatchSource:0}: Error finding container 15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a: Status 404 returned error can't find the container with id 15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.050992 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d5404e-802d-42c7-9245-579f6724b524" path="/var/lib/kubelet/pods/79d5404e-802d-42c7-9245-579f6724b524/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.052045 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63" path="/var/lib/kubelet/pods/aa0b5877-d5fe-4d24-aaaa-d88eedb8ef63/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.052616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a66dd3-f929-4a64-a1c3-82731fbe06e6" path="/var/lib/kubelet/pods/c5a66dd3-f929-4a64-a1c3-82731fbe06e6/volumes" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073377 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.073387 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerStarted","Data":"15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.074882 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerStarted","Data":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.078492 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.078532 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"8b86ca0ddd886dfba467ba83639ed4630d6babe59e46210d85c130eb9061c10d"} Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.103823 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.10380739 podStartE2EDuration="2.10380739s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:50.097751693 +0000 UTC m=+1404.735709039" watchObservedRunningTime="2026-01-30 16:45:50.10380739 +0000 UTC m=+1404.741764736" Jan 30 16:45:50 crc kubenswrapper[4766]: I0130 16:45:50.115065 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.115047269 podStartE2EDuration="2.115047269s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:45:50.111249585 +0000 UTC m=+1404.749206931" watchObservedRunningTime="2026-01-30 16:45:50.115047269 +0000 UTC m=+1404.753004615" Jan 30 16:45:51 crc kubenswrapper[4766]: I0130 16:45:51.091395 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3"} Jan 30 16:45:52 crc kubenswrapper[4766]: I0130 16:45:52.101614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e"} Jan 30 16:45:53 crc kubenswrapper[4766]: I0130 16:45:53.395031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 16:45:53 crc kubenswrapper[4766]: I0130 16:45:53.584339 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 16:45:54 crc kubenswrapper[4766]: I0130 16:45:54.474391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.134316 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerStarted","Data":"10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17"} Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.135392 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:45:55 crc kubenswrapper[4766]: I0130 16:45:55.157862 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.521245334 podStartE2EDuration="7.157841533s" podCreationTimestamp="2026-01-30 16:45:48 +0000 UTC" firstStartedPulling="2026-01-30 16:45:49.225057564 +0000 UTC m=+1403.863014910" lastFinishedPulling="2026-01-30 16:45:53.861653753 +0000 
UTC m=+1408.499611109" observedRunningTime="2026-01-30 16:45:55.153213417 +0000 UTC m=+1409.791170773" watchObservedRunningTime="2026-01-30 16:45:55.157841533 +0000 UTC m=+1409.795798879" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.584136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.617767 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.726800 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:58 crc kubenswrapper[4766]: I0130 16:45:58.726878 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.194642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.811373 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:45:59 crc kubenswrapper[4766]: I0130 16:45:59.811700 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.237961 4766 generic.go:334] "Generic (PLEG): container finished" podID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerID="135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5" exitCode=137 Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.238060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerDied","Data":"135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240532 4766 generic.go:334] "Generic (PLEG): container finished" podID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerID="6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" exitCode=137 Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240578 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e111c80e-0c45-49f2-bfc0-665fbdd2ac56","Type":"ContainerDied","Data":"7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d"} Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.240618 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e91b518e9df7a0df66e77c94960005b770ffdf960887cd6b2ab17f156b3e56d" Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.261441 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.375999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376056 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376127 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") pod \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\" (UID: \"e111c80e-0c45-49f2-bfc0-665fbdd2ac56\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.376711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs" (OuterVolumeSpecName: "logs") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.377332 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-logs\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.381295 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44" (OuterVolumeSpecName: "kube-api-access-l2n44") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "kube-api-access-l2n44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.410408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.410982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data" (OuterVolumeSpecName: "config-data") pod "e111c80e-0c45-49f2-bfc0-665fbdd2ac56" (UID: "e111c80e-0c45-49f2-bfc0-665fbdd2ac56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478897 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478930 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.478944 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2n44\" (UniqueName: \"kubernetes.io/projected/e111c80e-0c45-49f2-bfc0-665fbdd2ac56-kube-api-access-l2n44\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.637073 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.790937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.790983 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.791010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") pod \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\" (UID: \"e0275f96-c8b4-4219-8a95-f8cfa7a4edca\") "
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.795097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk" (OuterVolumeSpecName: "kube-api-access-xxcbk") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "kube-api-access-xxcbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.818958 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data" (OuterVolumeSpecName: "config-data") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.820576 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0275f96-c8b4-4219-8a95-f8cfa7a4edca" (UID: "e0275f96-c8b4-4219-8a95-f8cfa7a4edca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893595 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcbk\" (UniqueName: \"kubernetes.io/projected/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-kube-api-access-xxcbk\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893626 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:06 crc kubenswrapper[4766]: I0130 16:46:06.893635 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0275f96-c8b4-4219-8a95-f8cfa7a4edca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0275f96-c8b4-4219-8a95-f8cfa7a4edca","Type":"ContainerDied","Data":"d03cdc6170eebcf6ba04199860083b79a704186bcc24a8f0c94fb427aa1473a0"}
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250228 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250585 4766 scope.go:117] "RemoveContainer" containerID="135c1956a860be59824b856b724e9e55eaa85db098e7c6b8d270f3404e379bf5"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.250269 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.285434 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.304914 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.321231 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.331892 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.347661 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348096 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348113 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log"
Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348124 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348133 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata"
Jan 30 16:46:07 crc kubenswrapper[4766]: E0130 16:46:07.348154 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348160 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348359 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-log"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348371 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.348383 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" containerName="nova-metadata-metadata"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.349026 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352011 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352119 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.352258 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.354832 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.356696 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.359673 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.359964 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.367657 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.378647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504453 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.504967 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505125 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.505293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606580 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606688 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606770 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606868 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.606905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.607884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.612070 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.612881 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613152 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.613747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.615658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.616667 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.624319 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.628272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"nova-metadata-0\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " pod="openstack/nova-metadata-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.678314 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:07 crc kubenswrapper[4766]: I0130 16:46:07.707762 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.057534 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0275f96-c8b4-4219-8a95-f8cfa7a4edca" path="/var/lib/kubelet/pods/e0275f96-c8b4-4219-8a95-f8cfa7a4edca/volumes"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.058511 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e111c80e-0c45-49f2-bfc0-665fbdd2ac56" path="/var/lib/kubelet/pods/e111c80e-0c45-49f2-bfc0-665fbdd2ac56/volumes"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.234004 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.261039 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"00d52323719cdcf153e25b7a1622f149993ee5f6d853ba11e47ebf2bd0e4a738"}
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.303565 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:46:08 crc kubenswrapper[4766]: W0130 16:46:08.306652 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2852c370_2b06_4a98_9d48_190ed09dc7fb.slice/crio-e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f WatchSource:0}: Error finding container e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f: Status 404 returned error can't find the container with id e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.729628 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.730676 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.732721 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 16:46:08 crc kubenswrapper[4766]: I0130 16:46:08.733476 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.272147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.272479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerStarted","Data":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.275328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerStarted","Data":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.275363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerStarted","Data":"e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f"}
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.276250 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.279320 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.296978 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.296961489 podStartE2EDuration="2.296961489s" podCreationTimestamp="2026-01-30 16:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:09.28934217 +0000 UTC m=+1423.927299536" watchObservedRunningTime="2026-01-30 16:46:09.296961489 +0000 UTC m=+1423.934918835"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.333414 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.333393891 podStartE2EDuration="2.333393891s" podCreationTimestamp="2026-01-30 16:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:09.327003596 +0000 UTC m=+1423.964960942" watchObservedRunningTime="2026-01-30 16:46:09.333393891 +0000 UTC m=+1423.971351237"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.491511 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.493741 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.500703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.671841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672430 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.672587 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775137 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775360 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775415 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.775463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776739 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.776913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.777092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.809066 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dnsmasq-dns-59cf4bdb65-zcjhs\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:09 crc kubenswrapper[4766]: I0130 16:46:09.840825 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:10 crc kubenswrapper[4766]: W0130 16:46:10.353489 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc575168_b373_41ba_9dd6_2d9d168a6527.slice/crio-5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6 WatchSource:0}: Error finding container 5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6: Status 404 returned error can't find the container with id 5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6
Jan 30 16:46:10 crc kubenswrapper[4766]: I0130 16:46:10.354582 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"]
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292503 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerID="171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e" exitCode=0
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e"}
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.292904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerStarted","Data":"5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6"}
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.821052 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822054 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" containerID="cri-o://10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17" gracePeriod=30
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822093 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core" containerID="cri-o://0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e" gracePeriod=30
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822152 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent" containerID="cri-o://88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3" gracePeriod=30
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.822497 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent" containerID="cri-o://f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f" gracePeriod=30
Jan 30 16:46:11 crc kubenswrapper[4766]: I0130 16:46:11.844769 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.210394 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.302206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerStarted","Data":"961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310"}
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.302639 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.305971 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17" exitCode=0
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.305996 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e" exitCode=2
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306005 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3" exitCode=0
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306030 4766 generic.go:334] "Generic (PLEG): container finished" podID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerID="f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f" exitCode=0
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17"}
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306256 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" containerID="cri-o://d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" gracePeriod=30
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306293 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e"}
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3"}
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306312 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f"}
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.306555 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" containerID="cri-o://7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" gracePeriod=30
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.330995 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" podStartSLOduration=3.330978434 podStartE2EDuration="3.330978434s" podCreationTimestamp="2026-01-30 16:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:12.322996452 +0000 UTC m=+1426.960953798" watchObservedRunningTime="2026-01-30 16:46:12.330978434 +0000 UTC m=+1426.968935780"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.678483 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.708561 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.708618 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.831780 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950221 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950304 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950369 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950395 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950463 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") pod \"4682a3ba-d8f2-48f0-820c-961ee175193e\" (UID: \"4682a3ba-d8f2-48f0-820c-961ee175193e\") "
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.950749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.951083 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.951270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.957563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts" (OuterVolumeSpecName: "scripts") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.958037 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh" (OuterVolumeSpecName: "kube-api-access-f9wmh") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "kube-api-access-f9wmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:46:12 crc kubenswrapper[4766]: I0130 16:46:12.981967 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.029408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053222 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4682a3ba-d8f2-48f0-820c-961ee175193e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053261 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053274 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053286 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.053297 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wmh\" (UniqueName: \"kubernetes.io/projected/4682a3ba-d8f2-48f0-820c-961ee175193e-kube-api-access-f9wmh\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.055728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.096336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data" (OuterVolumeSpecName: "config-data") pod "4682a3ba-d8f2-48f0-820c-961ee175193e" (UID: "4682a3ba-d8f2-48f0-820c-961ee175193e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.154314 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.154364 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4682a3ba-d8f2-48f0-820c-961ee175193e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.315988 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4682a3ba-d8f2-48f0-820c-961ee175193e","Type":"ContainerDied","Data":"8b86ca0ddd886dfba467ba83639ed4630d6babe59e46210d85c130eb9061c10d"}
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.316044 4766 scope.go:117] "RemoveContainer" containerID="10c4ecee1cc3249bb8bb9e76e30cec2a7de20f074c2c187438eb8244558c1a17"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.316070 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.318121 4766 generic.go:334] "Generic (PLEG): container finished" podID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" exitCode=143
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.318226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"}
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.401567 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.410633 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.423621 4766 scope.go:117] "RemoveContainer" containerID="0f2730cffdbcf2a7d668c54d27d193919e51030eb1b48406db509abf3aab1a5e"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.428743 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429178 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429204 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429216 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429222 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd"
Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429247 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429253 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: E0130 16:46:13.429271 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429277 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429436 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="proxy-httpd"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429450 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-notification-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429466 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="ceilometer-central-agent"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.429483 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" containerName="sg-core"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.445025 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.449369 4766 scope.go:117] "RemoveContainer" containerID="88669538db1b407b13af54044dcfe7446f733bbfee3afd84694a09deab2733d3"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450215 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450521 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.450638 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.451658 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.499045 4766 scope.go:117] "RemoveContainer" containerID="f3b4d5555e6683d7c9a35452956e7db3f892b4d66ffc3b24f2410f434ccab80f"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565377 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.565492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566031 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566170 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.566236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668234 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668286 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668312 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668450 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668512 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.668545 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.669551 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.669621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.672820 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.673480 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.675906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.677311 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.683267 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.699383 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"ceilometer-0\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.782417 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:46:13 crc kubenswrapper[4766]: I0130 16:46:13.815905 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.049593 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4682a3ba-d8f2-48f0-820c-961ee175193e" path="/var/lib/kubelet/pods/4682a3ba-d8f2-48f0-820c-961ee175193e/volumes"
Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.244163 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:46:14 crc kubenswrapper[4766]: W0130 16:46:14.249451 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01bf866a_799b_42df_8838_91933afbb104.slice/crio-c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488 WatchSource:0}: Error finding container c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488: Status 404 returned error can't find the container with id c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488
Jan 30 16:46:14 crc kubenswrapper[4766]: I0130 16:46:14.329453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488"}
Jan 30 16:46:15 crc kubenswrapper[4766]: I0130 16:46:15.339386 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"}
Jan 30 16:46:15 crc kubenswrapper[4766]: I0130 16:46:15.977369 4766 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026897 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026973 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.026999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") pod \"0f697e9a-6e36-40c9-a199-29dc8ec19900\" (UID: \"0f697e9a-6e36-40c9-a199-29dc8ec19900\") " Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.028748 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs" (OuterVolumeSpecName: "logs") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.033376 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb" (OuterVolumeSpecName: "kube-api-access-gb8mb") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "kube-api-access-gb8mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.071029 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.087346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data" (OuterVolumeSpecName: "config-data") pod "0f697e9a-6e36-40c9-a199-29dc8ec19900" (UID: "0f697e9a-6e36-40c9-a199-29dc8ec19900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130147 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb8mb\" (UniqueName: \"kubernetes.io/projected/0f697e9a-6e36-40c9-a199-29dc8ec19900-kube-api-access-gb8mb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130208 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f697e9a-6e36-40c9-a199-29dc8ec19900-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130222 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.130234 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f697e9a-6e36-40c9-a199-29dc8ec19900-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.350218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353168 4766 generic.go:334] "Generic (PLEG): container finished" podID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" exitCode=0 Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353275 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353280 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"} Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353394 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f697e9a-6e36-40c9-a199-29dc8ec19900","Type":"ContainerDied","Data":"15c72c9ebd81d0974bd3c050d1376a74f29d2862d24231bcedf81abd624b957a"} Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.353419 4766 scope.go:117] "RemoveContainer" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.385152 4766 scope.go:117] "RemoveContainer" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.391316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.399680 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415109 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.415470 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.415514 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.415520 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.416962 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-log" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.416998 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" containerName="nova-api-api" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.417981 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.422193 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.422303 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.424379 4766 scope.go:117] "RemoveContainer" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.426447 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": container with ID starting with 7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0 not found: ID does not exist" containerID="7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.426482 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0"} err="failed to get container status \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": rpc error: code = NotFound desc = could not find container \"7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0\": container with ID starting with 7b848af7eef8b1e8cb177dd175a8103f075e0d709a2cd7d271102825954435f0 not found: ID does not exist" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.426509 4766 scope.go:117] "RemoveContainer" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" Jan 30 16:46:16 crc kubenswrapper[4766]: E0130 16:46:16.428000 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": container with ID starting with d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9 not found: ID does not exist" containerID="d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.428034 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9"} err="failed to get container status \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": rpc error: code = NotFound desc = could not find container \"d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9\": container with ID starting with d0e58709172611be47a7d079cf51386583460c1b5715a8c2e52ad8dc28416bb9 not found: ID does not exist" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.431562 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.434463 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.434940 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.435094 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.442875 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536502 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536706 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: 
\"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.536750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.538840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551123 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551594 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.551771 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.554989 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"nova-api-0\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " pod="openstack/nova-api-0" Jan 30 16:46:16 crc kubenswrapper[4766]: I0130 16:46:16.732754 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.198701 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.369481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"1a67e71d5d71a3f934c66c454b741f0b3ac1c9d352fcd86ce01318614ddc8465"} Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.679026 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.696283 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.709057 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:17 crc kubenswrapper[4766]: I0130 16:46:17.709287 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.051215 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f697e9a-6e36-40c9-a199-29dc8ec19900" path="/var/lib/kubelet/pods/0f697e9a-6e36-40c9-a199-29dc8ec19900/volumes" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.385459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.387624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.387687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerStarted","Data":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.429267 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.430914 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.430891187 podStartE2EDuration="2.430891187s" podCreationTimestamp="2026-01-30 16:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:18.406214397 +0000 UTC m=+1433.044171743" watchObservedRunningTime="2026-01-30 16:46:18.430891187 +0000 UTC m=+1433.068848533" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.578161 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.579546 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.582439 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.582643 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.587841 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704244 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704456 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704513 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.704543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.726152 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.726209 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806452 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: 
\"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806582 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.806661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.829957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.829954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.838888 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.843986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"nova-cell1-cell-mapping-rlpcs\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:18 crc kubenswrapper[4766]: I0130 16:46:18.902912 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.579524 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.842348 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.905169 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:19 crc kubenswrapper[4766]: I0130 16:46:19.905479 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" containerID="cri-o://89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" gracePeriod=10 Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.412953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerStarted","Data":"a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.413373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerStarted","Data":"e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420139 4766 generic.go:334] "Generic (PLEG): container finished" podID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerID="89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" exitCode=0 Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420213 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" event={"ID":"d92d5f78-a271-41e7-bde9-410e3db6ee58","Type":"ContainerDied","Data":"91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc"} Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.420259 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e65d6f256f9a51617953fca1ed6a1c2f94a9d6c711363f89bd2892d38340cc" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.434207 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rlpcs" podStartSLOduration=2.434159729 podStartE2EDuration="2.434159729s" podCreationTimestamp="2026-01-30 16:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:20.428079899 +0000 UTC m=+1435.066037255" watchObservedRunningTime="2026-01-30 16:46:20.434159729 +0000 UTC m=+1435.072117075" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.520343 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663452 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663521 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663634 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.663793 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") pod \"d92d5f78-a271-41e7-bde9-410e3db6ee58\" (UID: \"d92d5f78-a271-41e7-bde9-410e3db6ee58\") " Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.670367 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67" (OuterVolumeSpecName: "kube-api-access-nhm67") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "kube-api-access-nhm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.732037 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.734266 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.737955 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.747765 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.755098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config" (OuterVolumeSpecName: "config") pod "d92d5f78-a271-41e7-bde9-410e3db6ee58" (UID: "d92d5f78-a271-41e7-bde9-410e3db6ee58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766270 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766305 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766315 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766325 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhm67\" (UniqueName: \"kubernetes.io/projected/d92d5f78-a271-41e7-bde9-410e3db6ee58-kube-api-access-nhm67\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766333 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:20 crc kubenswrapper[4766]: I0130 16:46:20.766356 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d92d5f78-a271-41e7-bde9-410e3db6ee58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.428439 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lx7hm" Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.510971 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:21 crc kubenswrapper[4766]: I0130 16:46:21.521156 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lx7hm"] Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.058849 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" path="/var/lib/kubelet/pods/d92d5f78-a271-41e7-bde9-410e3db6ee58/volumes" Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.439804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerStarted","Data":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.439959 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" containerID="cri-o://08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.440238 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441551 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" containerID="cri-o://5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441609 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" containerID="cri-o://ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.441648 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" containerID="cri-o://0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" gracePeriod=30 Jan 30 16:46:22 crc kubenswrapper[4766]: I0130 16:46:22.468402 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.387301428 podStartE2EDuration="9.468384755s" podCreationTimestamp="2026-01-30 16:46:13 +0000 UTC" firstStartedPulling="2026-01-30 16:46:14.254902289 +0000 UTC m=+1428.892859635" lastFinishedPulling="2026-01-30 16:46:21.335985616 +0000 UTC m=+1435.973942962" observedRunningTime="2026-01-30 16:46:22.459285051 +0000 UTC m=+1437.097242397" watchObservedRunningTime="2026-01-30 16:46:22.468384755 +0000 UTC m=+1437.106342101" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.193334 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319468 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319512 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319566 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319686 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319895 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.319925 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") pod \"01bf866a-799b-42df-8838-91933afbb104\" (UID: \"01bf866a-799b-42df-8838-91933afbb104\") " Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.323649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.323818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.326491 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd" (OuterVolumeSpecName: "kube-api-access-pgmvd") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "kube-api-access-pgmvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.327816 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts" (OuterVolumeSpecName: "scripts") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.351971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.384908 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423469 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgmvd\" (UniqueName: \"kubernetes.io/projected/01bf866a-799b-42df-8838-91933afbb104-kube-api-access-pgmvd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423506 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423517 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423525 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01bf866a-799b-42df-8838-91933afbb104-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423534 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.423542 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.445541 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476873 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476908 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" exitCode=2 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476917 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476926 4766 generic.go:334] "Generic (PLEG): container finished" podID="01bf866a-799b-42df-8838-91933afbb104" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" exitCode=0 Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476995 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01bf866a-799b-42df-8838-91933afbb104","Type":"ContainerDied","Data":"c1b554db47d10f6ccbba7db486a12601a23becc11b2582c73eecf6b917aa1488"} Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.476996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data" (OuterVolumeSpecName: "config-data") pod "01bf866a-799b-42df-8838-91933afbb104" (UID: "01bf866a-799b-42df-8838-91933afbb104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477014 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.477018 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.510868 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.525752 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.525803 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bf866a-799b-42df-8838-91933afbb104-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.547775 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.551227 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.555884 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619233 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619815 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619834 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619856 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="init" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619861 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="init" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619870 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619876 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619887 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619893 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.619908 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc 
kubenswrapper[4766]: E0130 16:46:23.619929 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.619936 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620105 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92d5f78-a271-41e7-bde9-410e3db6ee58" containerName="dnsmasq-dns" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620122 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="proxy-httpd" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620134 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-notification-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620144 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="ceilometer-central-agent" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.620154 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bf866a-799b-42df-8838-91933afbb104" containerName="sg-core" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.621694 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.630991 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631893 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.631968 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.656214 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.692834 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.693307 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.693659 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting 
with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.693696 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.694430 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.694601 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.694631 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.694973 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695010 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695033 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: E0130 16:46:23.695412 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695453 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": 
rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.695486 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696125 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696154 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696439 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696464 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.696969 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697004 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697350 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697375 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc 
kubenswrapper[4766]: I0130 16:46:23.697774 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.697792 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698062 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698087 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698334 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698353 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698544 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698565 4766 scope.go:117] "RemoveContainer" containerID="5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698737 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c"} err="failed to get container status \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": rpc error: code = NotFound desc = could not find container \"5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c\": container with ID 
starting with 5e49ca9825f18a3323c612c7e702c9224ce8465815d066fdab56feda3e48253c not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698752 4766 scope.go:117] "RemoveContainer" containerID="ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698922 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325"} err="failed to get container status \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": rpc error: code = NotFound desc = could not find container \"ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325\": container with ID starting with ad340c8aca0029e4920f80190b92950d94ca01049812f06f02a9c3b0f61d7325 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.698935 4766 scope.go:117] "RemoveContainer" containerID="0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699121 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036"} err="failed to get container status \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": rpc error: code = NotFound desc = could not find container \"0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036\": container with ID starting with 0f0ea8581bb8c99d60e447f2010a7ab429d09e86c7ee54e9e05c6bc4571b9036 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699133 4766 scope.go:117] "RemoveContainer" containerID="08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.699396 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68"} err="failed to get container status \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": rpc error: code = NotFound desc = could not find container \"08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68\": container with ID starting with 08180bf41a76e968f7d6e5f974cb00ab2c9fe39ff71338a5bfd851705e724e68 not found: ID does not exist" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738245 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738284 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738315 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " 
pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738358 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738443 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.738512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839546 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839603 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839625 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839709 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839805 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839863 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.839884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.840254 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.840866 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.843798 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.843932 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.844815 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.853788 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.854450 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.855563 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"ceilometer-0\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " pod="openstack/ceilometer-0" Jan 30 16:46:23 crc kubenswrapper[4766]: I0130 16:46:23.946448 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.067394 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bf866a-799b-42df-8838-91933afbb104" path="/var/lib/kubelet/pods/01bf866a-799b-42df-8838-91933afbb104/volumes" Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.436574 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:46:24 crc kubenswrapper[4766]: I0130 16:46:24.491467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"9e20509f1f367971ebad4df00092bfa9e6a737cd37ee5f2217bf7f1fb1c22b6c"} Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.511659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a"} Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.513705 4766 generic.go:334] "Generic (PLEG): container finished" podID="c683df85-82ee-4038-883c-c47b3aa46bec" containerID="a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d" exitCode=0 Jan 30 16:46:25 crc kubenswrapper[4766]: I0130 16:46:25.513750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerDied","Data":"a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d"} Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.524137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6"} Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.734131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.736445 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.875410 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936586 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936746 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.936869 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") pod \"c683df85-82ee-4038-883c-c47b3aa46bec\" (UID: \"c683df85-82ee-4038-883c-c47b3aa46bec\") " Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.944333 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv" (OuterVolumeSpecName: "kube-api-access-qrlmv") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "kube-api-access-qrlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.958579 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts" (OuterVolumeSpecName: "scripts") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:26 crc kubenswrapper[4766]: I0130 16:46:26.993749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data" (OuterVolumeSpecName: "config-data") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.020358 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c683df85-82ee-4038-883c-c47b3aa46bec" (UID: "c683df85-82ee-4038-883c-c47b3aa46bec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038383 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038570 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038681 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrlmv\" (UniqueName: \"kubernetes.io/projected/c683df85-82ee-4038-883c-c47b3aa46bec-kube-api-access-qrlmv\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.038749 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c683df85-82ee-4038-883c-c47b3aa46bec-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.534803 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rlpcs" event={"ID":"c683df85-82ee-4038-883c-c47b3aa46bec","Type":"ContainerDied","Data":"e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe"} Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.535922 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12619d95d16f1a55e971e5eb02655b9537d6b5b6e1489ce81521828eefdfcbe" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.534831 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rlpcs" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.537566 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4"} Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708233 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708478 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" containerID="cri-o://5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.708940 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" containerID="cri-o://4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.723506 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.723719 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Jan 
30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.740046 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.740401 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" containerID="cri-o://23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" gracePeriod=30 Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.748429 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.761669 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.762499 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:27 crc kubenswrapper[4766]: I0130 16:46:27.795128 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.548835 4766 generic.go:334] "Generic (PLEG): container finished" podID="23e893e4-3d60-421d-ad41-bc0f76112015" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" exitCode=143 Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.548899 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} Jan 30 16:46:28 crc kubenswrapper[4766]: I0130 16:46:28.555989 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.586066 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.587300 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.588512 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:46:28 crc kubenswrapper[4766]: E0130 16:46:28.588554 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.558404 4766 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" containerID="cri-o://817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" gracePeriod=30 Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.558766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerStarted","Data":"858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4"} Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.559203 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" containerID="cri-o://f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" gracePeriod=30 Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.560608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:46:29 crc kubenswrapper[4766]: I0130 16:46:29.589885 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.006412179 podStartE2EDuration="6.589866821s" podCreationTimestamp="2026-01-30 16:46:23 +0000 UTC" firstStartedPulling="2026-01-30 16:46:24.432162964 +0000 UTC m=+1439.070120310" lastFinishedPulling="2026-01-30 16:46:29.015617606 +0000 UTC m=+1443.653574952" observedRunningTime="2026-01-30 16:46:29.582084263 +0000 UTC m=+1444.220041609" watchObservedRunningTime="2026-01-30 16:46:29.589866821 +0000 UTC m=+1444.227824167" Jan 30 16:46:30 crc kubenswrapper[4766]: I0130 16:46:30.569498 4766 generic.go:334] "Generic (PLEG): container finished" podID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" exitCode=143 Jan 30 16:46:30 crc kubenswrapper[4766]: I0130 16:46:30.573167 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.542829 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584719 4766 generic.go:334] "Generic (PLEG): container finished" podID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" exitCode=0 Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584769 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerDied","Data":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861","Type":"ContainerDied","Data":"416558aff7b28c3ff1ea22294f12594e969f7f4faf03939457f56d9bd99a3f11"} Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584812 4766 scope.go:117] "RemoveContainer" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.584923 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.642881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.643118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.643238 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") pod \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\" (UID: \"e7e7ef23-9d73-45f9-aeae-9fb0bf16b861\") " Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.652944 4766 scope.go:117] "RemoveContainer" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.658613 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8" (OuterVolumeSpecName: "kube-api-access-q8nc8") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "kube-api-access-q8nc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.677689 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": container with ID starting with 23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e not found: ID does not exist" containerID="23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.677750 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e"} err="failed to get container status \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": rpc error: code = NotFound desc = could not find container \"23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e\": container with ID starting with 23c5da4a09be36459ef2ef3b673003e33937b08f5a5a89b7b45b8074c830e74e not found: ID does not exist" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.732355 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.746992 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.747036 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8nc8\" (UniqueName: \"kubernetes.io/projected/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-kube-api-access-q8nc8\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.760356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data" (OuterVolumeSpecName: "config-data") pod "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" (UID: "e7e7ef23-9d73-45f9-aeae-9fb0bf16b861"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.827256 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50848->10.217.0.196:8775: read: connection reset by peer" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.827645 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50842->10.217.0.196:8775: read: connection reset by peer" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.848383 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.937052 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.954255 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.962716 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.964261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964358 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: E0130 16:46:32.964626 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964687 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964931 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" containerName="nova-manage" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.964992 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" containerName="nova-scheduler-scheduler" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.965613 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.968001 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:46:32 crc kubenswrapper[4766]: I0130 16:46:32.998921 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052164 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.052619 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157215 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157380 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.157465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.162873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.166906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.176460 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrmz\" (UniqueName: 
\"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"nova-scheduler-0\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.288263 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.299196 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.360982 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.361024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") pod \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\" (UID: \"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.361537 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs" (OuterVolumeSpecName: "logs") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.375246 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx" (OuterVolumeSpecName: "kube-api-access-mf4cx") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "kube-api-access-mf4cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.416557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data" (OuterVolumeSpecName: "config-data") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.420418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.454860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" (UID: "8e30ce0e-ad1f-4433-91ba-3a19be83ffc9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463517 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463565 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463577 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463588 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf4cx\" (UniqueName: \"kubernetes.io/projected/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-kube-api-access-mf4cx\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.463599 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.555711 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604129 4766 generic.go:334] "Generic (PLEG): container finished" podID="23e893e4-3d60-421d-ad41-bc0f76112015" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" exitCode=0 Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604202 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23e893e4-3d60-421d-ad41-bc0f76112015","Type":"ContainerDied","Data":"1a67e71d5d71a3f934c66c454b741f0b3ac1c9d352fcd86ce01318614ddc8465"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604270 4766 scope.go:117] "RemoveContainer" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.604656 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627330 4766 generic.go:334] "Generic (PLEG): container finished" podID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" exitCode=0 Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627378 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627409 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e30ce0e-ad1f-4433-91ba-3a19be83ffc9","Type":"ContainerDied","Data":"00d52323719cdcf153e25b7a1622f149993ee5f6d853ba11e47ebf2bd0e4a738"} Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.627425 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667084 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667173 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667223 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667284 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.667867 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs" (OuterVolumeSpecName: "logs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.668152 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") pod \"23e893e4-3d60-421d-ad41-bc0f76112015\" (UID: \"23e893e4-3d60-421d-ad41-bc0f76112015\") " Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.668786 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23e893e4-3d60-421d-ad41-bc0f76112015-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.671618 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t" (OuterVolumeSpecName: "kube-api-access-fxs9t") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "kube-api-access-fxs9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.719998 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data" (OuterVolumeSpecName: "config-data") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.737588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.741787 4766 scope.go:117] "RemoveContainer" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.749342 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.754441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.763817 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764280 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764307 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764327 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764335 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764352 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764361 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.764384 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764391 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764548 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" containerName="nova-metadata-metadata" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764565 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" 
containerName="nova-metadata-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764577 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-api" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.764591 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" containerName="nova-api-log" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.765592 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.773809 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.774199 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.820464 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826316 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826457 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.826471 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxs9t\" (UniqueName: \"kubernetes.io/projected/23e893e4-3d60-421d-ad41-bc0f76112015-kube-api-access-fxs9t\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.839512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.859426 4766 scope.go:117] "RemoveContainer" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.864583 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": container with ID starting with 4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3 not found: ID does not exist" containerID="4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.864624 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3"} err="failed to get container status \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": rpc error: code = NotFound desc = could not find container \"4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3\": container with ID starting with 4c788f0562704bc4dfadcb722798403e7930e6f7adcdcde22c3c13a6df3f41a3 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.864652 4766 scope.go:117] "RemoveContainer" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.865561 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": container with ID starting with 5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65 not found: ID does not exist" containerID="5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.865598 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65"} err="failed to get container status \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": rpc error: code = NotFound desc = could not find container \"5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65\": container with ID starting with 5ccdbaf3bb96dfd8169ab0fe592d26c7a0d1efe43b5c08bbc7390fe652947a65 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.865614 4766 scope.go:117] "RemoveContainer" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.873859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "23e893e4-3d60-421d-ad41-bc0f76112015" (UID: "23e893e4-3d60-421d-ad41-bc0f76112015"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:46:33 crc kubenswrapper[4766]: W0130 16:46:33.873511 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f217490_8a26_4f4b_935b_fe5918500948.slice/crio-f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b WatchSource:0}: Error finding container f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b: Status 404 returned error can't find the container with id f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.883259 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.904056 4766 scope.go:117] "RemoveContainer" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929165 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929246 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929291 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929398 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.929410 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e893e4-3d60-421d-ad41-bc0f76112015-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937265 4766 scope.go:117] "RemoveContainer" 
containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.937815 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": container with ID starting with f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80 not found: ID does not exist" containerID="f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937846 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80"} err="failed to get container status \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": rpc error: code = NotFound desc = could not find container \"f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80\": container with ID starting with f33cca9b31b0707055cc7d972b3ac113dfb61b700ced61c9579110d866526c80 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.937870 4766 scope.go:117] "RemoveContainer" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" Jan 30 16:46:33 crc kubenswrapper[4766]: E0130 16:46:33.938092 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": container with ID starting with 817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241 not found: ID does not exist" containerID="817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.938126 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241"} err="failed to get container status \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": rpc error: code = NotFound desc = could not find container \"817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241\": container with ID starting with 817ef08566cd27d90ced8fe5b2ae3d2003a5823e02c0a237d13eebcc353ba241 not found: ID does not exist" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.946220 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.960862 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.974383 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.976105 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.977886 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.978685 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.979080 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:46:33 crc kubenswrapper[4766]: I0130 16:46:33.986139 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030420 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030748 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.030928 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.031641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.035713 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.035724 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " 
pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.036419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.050212 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"nova-metadata-0\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.053829 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e893e4-3d60-421d-ad41-bc0f76112015" path="/var/lib/kubelet/pods/23e893e4-3d60-421d-ad41-bc0f76112015/volumes" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.056317 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e30ce0e-ad1f-4433-91ba-3a19be83ffc9" path="/var/lib/kubelet/pods/8e30ce0e-ad1f-4433-91ba-3a19be83ffc9/volumes" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.057034 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e7ef23-9d73-45f9-aeae-9fb0bf16b861" path="/var/lib/kubelet/pods/e7e7ef23-9d73-45f9-aeae-9fb0bf16b861/volumes" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133015 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133276 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.133356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.151985 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235117 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.235717 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.239599 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.247172 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.247692 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.249151 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.252269 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.254782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"nova-api-0\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " pod="openstack/nova-api-0" Jan 30 16:46:34 crc kubenswrapper[4766]: I0130 16:46:34.306528 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.641616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerStarted","Data":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.641659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerStarted","Data":"f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.672206 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.674670 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.674653491 podStartE2EDuration="2.674653491s" podCreationTimestamp="2026-01-30 16:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:34.655931798 +0000 UTC m=+1449.293889154" watchObservedRunningTime="2026-01-30 16:46:34.674653491 +0000 UTC m=+1449.312610837" Jan 30 16:46:35 crc kubenswrapper[4766]: W0130 16:46:34.798731 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ae2453_74fa_4114_9261_21b381518493.slice/crio-7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244 WatchSource:0}: Error finding container 7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244: Status 404 returned error can't find the container with id 7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244 Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:34.800464 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651832 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.651847 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerStarted","Data":"fbc4233875c212f4b897d1f9917772ed396cd3598ca0ca808134dccd327aa2de"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653517 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.653556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerStarted","Data":"7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244"} Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.680687 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.680663779 podStartE2EDuration="2.680663779s" podCreationTimestamp="2026-01-30 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:35.670222578 +0000 UTC m=+1450.308179944" watchObservedRunningTime="2026-01-30 16:46:35.680663779 +0000 UTC m=+1450.318621135" Jan 30 16:46:35 crc kubenswrapper[4766]: I0130 16:46:35.693462 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.693443946 podStartE2EDuration="2.693443946s" podCreationTimestamp="2026-01-30 16:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:46:35.691254415 +0000 UTC m=+1450.329211771" watchObservedRunningTime="2026-01-30 16:46:35.693443946 +0000 UTC m=+1450.331401292" Jan 30 16:46:38 crc kubenswrapper[4766]: I0130 16:46:38.288556 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 16:46:39 crc kubenswrapper[4766]: I0130 16:46:39.152961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:46:39 crc kubenswrapper[4766]: I0130 16:46:39.153319 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.289421 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.327649 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:46:43 crc kubenswrapper[4766]: I0130 16:46:43.765191 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:46:44 crc kubenswrapper[4766]: 
I0130 16:46:44.153031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.153078 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.307737 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:44 crc kubenswrapper[4766]: I0130 16:46:44.307815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.172314 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.172329 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.321434 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:45 crc kubenswrapper[4766]: I0130 16:46:45.321452 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:46:53 crc kubenswrapper[4766]: I0130 16:46:53.955378 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.159008 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.159111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.166156 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.167462 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.317909 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.319642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.329099 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.335201 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.833834 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:46:54 crc kubenswrapper[4766]: I0130 16:46:54.842408 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:46:56 crc kubenswrapper[4766]: I0130 16:46:56.991142 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:46:56 crc kubenswrapper[4766]: I0130 16:46:56.993529 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.000843 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072580 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072702 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.072789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.174913 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.175350 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") 
pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.175362 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.200488 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"redhat-operators-6kx5n\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.329297 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:46:57 crc kubenswrapper[4766]: I0130 16:46:57.862338 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878583 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" exitCode=0 Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878898 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14"} Jan 30 16:46:58 crc kubenswrapper[4766]: I0130 16:46:58.878965 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"721b24966425ad3828c4ed010c44283d43a0eeb0f5dae60a2287376c39e4728d"} Jan 30 16:46:59 crc kubenswrapper[4766]: I0130 16:46:59.889000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"} Jan 30 16:47:02 crc kubenswrapper[4766]: I0130 16:47:02.918560 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" exitCode=0 Jan 30 16:47:02 crc kubenswrapper[4766]: I0130 16:47:02.918660 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"} Jan 30 16:47:03 crc kubenswrapper[4766]: I0130 16:47:03.933245 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerStarted","Data":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"} Jan 30 16:47:03 crc kubenswrapper[4766]: I0130 16:47:03.966167 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-6kx5n" podStartSLOduration=3.4675488789999998 podStartE2EDuration="7.966126771s" podCreationTimestamp="2026-01-30 16:46:56 +0000 UTC" firstStartedPulling="2026-01-30 16:46:58.881012312 +0000 UTC m=+1473.518969658" lastFinishedPulling="2026-01-30 16:47:03.379590194 +0000 UTC m=+1478.017547550" observedRunningTime="2026-01-30 16:47:03.950815174 +0000 UTC m=+1478.588772540" watchObservedRunningTime="2026-01-30 16:47:03.966126771 +0000 UTC m=+1478.604084117" Jan 30 16:47:07 crc kubenswrapper[4766]: I0130 16:47:07.329961 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:07 crc kubenswrapper[4766]: I0130 16:47:07.330322 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:08 crc kubenswrapper[4766]: I0130 16:47:08.375240 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6kx5n" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" probeResult="failure" output=< Jan 30 16:47:08 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 16:47:08 crc kubenswrapper[4766]: > Jan 30 16:47:09 crc kubenswrapper[4766]: I0130 16:47:09.045520 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:47:09 crc kubenswrapper[4766]: I0130 16:47:09.045783 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.420860 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.422504 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.428943 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.500945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.543408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.543560 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.605254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.630812 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-66a8-account-create-update-wk4g8"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.645474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.645619 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.646402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.662238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.663655 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.670089 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.683267 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.713251 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.713503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" containerID="cri-o://df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" gracePeriod=2 Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.742228 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.747552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.747601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.791222 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"barbican-66a8-account-create-update-hh2cg\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.854758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.854806 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.855513 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" 
Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.873609 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.912023 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"root-account-create-update-jfd74\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:13 crc kubenswrapper[4766]: I0130 16:47:13.999264 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jppr8"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.055554 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.061063 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.104294 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3747d6ac-f476-429b-83b8-c5a65a241d47" path="/var/lib/kubelet/pods/3747d6ac-f476-429b-83b8-c5a65a241d47/volumes" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.104961 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9dd82ac-e512-442e-97c4-53be730affca" path="/var/lib/kubelet/pods/e9dd82ac-e512-442e-97c4-53be730affca/volumes" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.121676 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.122085 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122099 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122300 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" containerName="openstackclient" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.122870 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.129599 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.160572 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.164288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.164366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.195255 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.203234 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cc14-account-create-update-jhjn2"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.245905 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.277519 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.277992 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.280115 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.280578 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.281346 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" 
containerID="cri-o://68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" gracePeriod=300 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.336633 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.337042 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" containerID="cri-o://1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" gracePeriod=30 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.337398 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" containerID="cri-o://722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad" gracePeriod=30 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.357618 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"placement-cc14-account-create-update-6kfvc\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.384566 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.384627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:14.884607056 +0000 UTC m=+1489.522564412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.393551 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.465532 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-270a-account-create-update-d5mdk"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.518247 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.537706 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" containerID="cri-o://20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" gracePeriod=300 Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.588281 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.589748 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.619801 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.627395 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jpmx7"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.669536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.674636 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.693950 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.694011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.786344 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.787755 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.801761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.801877 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.803034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.805528 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.819886 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.831246 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.869230 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b00e-account-create-update-r7p4m"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.886905 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"nova-api-b00e-account-create-update-pkszz\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.896722 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.904603 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.904900 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:14 crc 
kubenswrapper[4766]: E0130 16:47:14.905329 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: E0130 16:47:14.905458 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:15.905436678 +0000 UTC m=+1490.543394024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.931119 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-321b-account-create-update-fb9ws"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.962310 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.978486 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rxmkt"] Jan 30 16:47:14 crc kubenswrapper[4766]: I0130 16:47:14.995448 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.009480 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.021412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.023106 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.022725 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.035994 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.055697 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mq5sq"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.078564 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-d2bd4"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.104801 4766 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.208436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"nova-cell0-1273-account-create-update-qhttp\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237520 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237878 4766 generic.go:334] "Generic (PLEG): container finished" podID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerID="68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" exitCode=2 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.237902 4766 generic.go:334] "Generic (PLEG): container finished" podID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerID="20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" exitCode=143 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.238001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.238031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.252704 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerID="722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad" exitCode=2 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.252773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad"} Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.301385 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zgzf5"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.330484 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.330806 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-rsxl2" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" containerID="cri-o://ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.358748 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.398531 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:15 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:15 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:15 crc kubenswrapper[4766]: else Jan 30 16:47:15 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:15 crc kubenswrapper[4766]: fi Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:15 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:15 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:15 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:15 crc kubenswrapper[4766]: # support updates Jan 30 16:47:15 crc kubenswrapper[4766]: Jan 30 16:47:15 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.400262 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.449963 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.503412 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.551801 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-83af-account-create-update-87kzk"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.608232 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.692255 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.692496 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" containerID="cri-o://961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" gracePeriod=10 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.747319 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:47:15 crc 
kubenswrapper[4766]: E0130 16:47:15.799578 4766 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-clmnh" message="Exiting ovn-controller (1) " Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.799616 4766 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" containerID="cri-o://cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.799648 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-clmnh" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" containerID="cri-o://cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.799783 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sc6rp"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.825291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.826159 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" containerID="cri-o://0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" gracePeriod=300 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.845205 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.854703 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rlpcs"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.871035 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.883196 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.906500 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2sfxl"] Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.916306 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:15 crc kubenswrapper[4766]: E0130 16:47:15.916409 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.916384444 +0000 UTC m=+1492.554341790 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.916695 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" containerID="cri-o://35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" gracePeriod=300 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.935499 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.935732 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" containerID="cri-o://e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.936138 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" containerID="cri-o://a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.963666 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.963896 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" containerID="cri-o://ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.964296 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" containerID="cri-o://a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.976249 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.976643 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" containerID="cri-o://a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.977151 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" containerID="cri-o://f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.990684 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": EOF" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 
16:47:15.990838 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": EOF" Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994187 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994401 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d8797fb6-zzsfd" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" containerID="cri-o://13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" gracePeriod=30 Jan 30 16:47:15 crc kubenswrapper[4766]: I0130 16:47:15.994799 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d8797fb6-zzsfd" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" containerID="cri-o://e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.015139 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.015676 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" containerID="cri-o://7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.016156 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" containerID="cri-o://7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.030546 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.030627 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:16.530600266 +0000 UTC m=+1491.168557622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.128695 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c69ac66-232c-41b5-95a8-66eeb597bf70" path="/var/lib/kubelet/pods/0c69ac66-232c-41b5-95a8-66eeb597bf70/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.132943 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d" path="/var/lib/kubelet/pods/0ebd673f-7ca2-48b4-a9a9-2fe489cf3a2d/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.133623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10bcd3d7-2c30-4a51-9455-2ffed88a7f43" path="/var/lib/kubelet/pods/10bcd3d7-2c30-4a51-9455-2ffed88a7f43/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.134165 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a05e847-bb50-49ab-821d-e2432c0f01e9" path="/var/lib/kubelet/pods/3a05e847-bb50-49ab-821d-e2432c0f01e9/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.134972 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d1f0ba-d11c-4e08-9e01-5783f42a6b84" path="/var/lib/kubelet/pods/42d1f0ba-d11c-4e08-9e01-5783f42a6b84/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.137110 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc27037-152a-461b-bce1-6d37b38bbb95" path="/var/lib/kubelet/pods/4bc27037-152a-461b-bce1-6d37b38bbb95/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.137831 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75830eb2-571a-4fef-92b5-057b0928cfe0" path="/var/lib/kubelet/pods/75830eb2-571a-4fef-92b5-057b0928cfe0/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.140339 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7639b60e-a348-4203-84b6-68af413cd517" path="/var/lib/kubelet/pods/7639b60e-a348-4203-84b6-68af413cd517/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.142039 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c08adc-cebc-4bff-8994-d8f1f0cb59d7" path="/var/lib/kubelet/pods/83c08adc-cebc-4bff-8994-d8f1f0cb59d7/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.143123 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98478911-5d75-4bba-a256-e1c2c28e56de" path="/var/lib/kubelet/pods/98478911-5d75-4bba-a256-e1c2c28e56de/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.144556 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8b317f-6f81-4ac9-a854-7b71e384ed98" path="/var/lib/kubelet/pods/ad8b317f-6f81-4ac9-a854-7b71e384ed98/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.147542 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c683df85-82ee-4038-883c-c47b3aa46bec" path="/var/lib/kubelet/pods/c683df85-82ee-4038-883c-c47b3aa46bec/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.150458 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db058df5-07b8-4d6e-a646-48ac7105c516" path="/var/lib/kubelet/pods/db058df5-07b8-4d6e-a646-48ac7105c516/volumes" Jan 30 16:47:16 crc kubenswrapper[4766]: 
I0130 16:47:16.153964 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.153998 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154012 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-n8rf4"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154026 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154479 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" containerID="cri-o://374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154792 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" containerID="cri-o://9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154841 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" containerID="cri-o://fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154872 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" containerID="cri-o://1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154902 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater" containerID="cri-o://2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154935 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" containerID="cri-o://cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154965 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" containerID="cri-o://686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.154993 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" containerID="cri-o://93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" gracePeriod=30 Jan 30 
16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155020 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" containerID="cri-o://ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155051 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" containerID="cri-o://3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155077 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" containerID="cri-o://7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155103 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" containerID="cri-o://4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155130 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" containerID="cri-o://b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155160 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" containerID="cri-o://8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.155222 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" containerID="cri-o://13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.175350 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-x95v6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.186068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.252302 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nwrgq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.283291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.320899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.321195 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" 
containerID="cri-o://2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.321319 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" containerID="cri-o://7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.328881 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.347372 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.379490 4766 generic.go:334] "Generic (PLEG): container finished" podID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.379654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.382839 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rsxl2_140fa04a-cb22-40ed-a08c-17f4ea13a5c4/openstack-network-exporter/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.382905 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.383261 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zf522"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.386859 4766 generic.go:334] "Generic (PLEG): container finished" podID="372f7d7a-9066-4b9b-884a-5257785ed101" containerID="df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" exitCode=137 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.390413 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.390453 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerID="0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" exitCode=2 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392044 4766 generic.go:334] "Generic (PLEG): container finished" podID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerID="35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392123 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.392150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270"} Jan 30 16:47:16 crc 
kubenswrapper[4766]: I0130 16:47:16.395588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jfd74" event={"ID":"4e9bbf1f-b039-4112-ab71-308535065091","Type":"ContainerStarted","Data":"fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.398927 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-jfd74" secret="" err="secret \"galera-openstack-cell1-dockercfg-zd2kf\" not found" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.431651 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:16 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:16 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:16 crc kubenswrapper[4766]: else Jan 30 16:47:16 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:16 crc kubenswrapper[4766]: fi Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:16 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:16 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:16 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:16 crc kubenswrapper[4766]: # support updates Jan 30 16:47:16 crc kubenswrapper[4766]: Jan 30 16:47:16 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.433059 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461737 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461885 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.461977 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.462003 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") pod \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\" (UID: \"140fa04a-cb22-40ed-a08c-17f4ea13a5c4\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.462511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.464824 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). 
InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.467814 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config" (OuterVolumeSpecName: "config") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.472768 4766 generic.go:334] "Generic (PLEG): container finished" podID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerID="cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" exitCode=0 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.472863 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerDied","Data":"cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.473906 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.473979 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504638 4766 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504677 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.504686 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.545482 4766 generic.go:334] "Generic (PLEG): container finished" podID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.545557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549135 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rsxl2_140fa04a-cb22-40ed-a08c-17f4ea13a5c4/openstack-network-exporter/0.log" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549185 4766 generic.go:334] "Generic (PLEG): container finished" podID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerID="ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" exitCode=2 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549239 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rsxl2" 
event={"ID":"140fa04a-cb22-40ed-a08c-17f4ea13a5c4","Type":"ContainerDied","Data":"ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549277 4766 scope.go:117] "RemoveContainer" containerID="ca773f6965466e1c966e4078c56699b7af7241f8034d067ce868bbc53f1f1cda" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.549494 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rsxl2" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.563424 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.563803 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" containerID="cri-o://e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.564142 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" containerID="cri-o://f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.586557 4766 generic.go:334] "Generic (PLEG): container finished" podID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerID="961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" exitCode=0 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.586685 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.591119 4766 generic.go:334] "Generic (PLEG): container finished" podID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerID="13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" exitCode=143 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.591304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64"} Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.602798 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.603645 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" containerID="cri-o://7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.605084 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" containerID="cri-o://078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.607983 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608142 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608193 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608405 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608497 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.608991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\" (UID: \"1e751b80-d475-4bfd-a382-5d9e1618e5aa\") " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.610738 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4" (OuterVolumeSpecName: "kube-api-access-zh9x4") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "kube-api-access-zh9x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.613006 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config" (OuterVolumeSpecName: "config") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.613587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.637999 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts" (OuterVolumeSpecName: "scripts") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.664586 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.664658 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.16463952 +0000 UTC m=+1491.802596866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665270 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665588 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.665605 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh9x4\" (UniqueName: \"kubernetes.io/projected/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-kube-api-access-zh9x4\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669466 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669500 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.669511 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e751b80-d475-4bfd-a382-5d9e1618e5aa-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.671012 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: E0130 16:47:16.671116 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:17.671084391 +0000 UTC m=+1492.309041737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.679117 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.686101 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc" (OuterVolumeSpecName: "kube-api-access-85mnc") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "kube-api-access-85mnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.702322 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.771773 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.771804 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mnc\" (UniqueName: \"kubernetes.io/projected/1e751b80-d475-4bfd-a382-5d9e1618e5aa-kube-api-access-85mnc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.775361 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dksnn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.796972 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.802356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818249 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818502 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" containerID="cri-o://712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.818925 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" containerID="cri-o://812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.834062 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.843512 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2h7p2"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.850787 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.854603 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" containerID="cri-o://83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" gracePeriod=29 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.862617 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.862897 4766 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-d6c45fdd9-srlkx" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" containerID="cri-o://929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.863498 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-d6c45fdd9-srlkx" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" containerID="cri-o://e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.874070 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.874096 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.884003 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.892239 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" containerID="cri-o://087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" gracePeriod=29 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.893509 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-smswb"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.900573 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.908345 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8mgkl"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.920379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.931845 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-63c5-account-create-update-sx7bq"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.949776 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.954820 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" containerID="cri-o://068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.955849 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7d7d659cc9-88mc9" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" containerID="cri-o://75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" gracePeriod=30 Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.960120 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:16 crc 
kubenswrapper[4766]: I0130 16:47:16.970122 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.979713 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.981919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "140fa04a-cb22-40ed-a08c-17f4ea13a5c4" (UID: "140fa04a-cb22-40ed-a08c-17f4ea13a5c4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:16 crc kubenswrapper[4766]: I0130 16:47:16.986854 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" containerID="cri-o://c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.004867 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "barbican" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="barbican" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.007584 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-66a8-account-create-update-hh2cg" podUID="d12bc030-c731-4999-ac6d-1be59807c6de" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.032788 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" containerID="cri-o://b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.043102 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.060267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.064046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1e751b80-d475-4bfd-a382-5d9e1618e5aa" (UID: "1e751b80-d475-4bfd-a382-5d9e1618e5aa"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.065923 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" containerID="cri-o://db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" gracePeriod=604800 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092673 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092721 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/140fa04a-cb22-40ed-a08c-17f4ea13a5c4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.092734 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e751b80-d475-4bfd-a382-5d9e1618e5aa-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.097967 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pq28c"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.161034 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197552 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.197621 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:18.197600142 +0000 UTC m=+1492.835557488 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.205976 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.206610 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.253055 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.272666 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.272942 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" containerID="cri-o://49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.282390 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.288802 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.289030 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" containerID="cri-o://7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.299909 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d5p85"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.307294 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.310456 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-clmnh" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.314662 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.317734 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.317874 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.324831 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.327442 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.327512 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.330255 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsc6g"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.350238 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.381036 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.402375 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.416696 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-rsxl2"] Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.420919 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "nova_cell0" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="nova_cell0" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.422996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423052 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423082 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423104 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423159 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423290 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423315 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423341 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423374 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423397 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423417 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423489 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423527 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") pod \"372f7d7a-9066-4b9b-884a-5257785ed101\" (UID: \"372f7d7a-9066-4b9b-884a-5257785ed101\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423573 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423639 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423674 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423730 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") pod \"dc575168-b373-41ba-9dd6-2d9d168a6527\" (UID: \"dc575168-b373-41ba-9dd6-2d9d168a6527\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") pod \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\" (UID: \"c4c6022b-f99b-41de-8048-ac8e4c4fa68f\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.423797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") pod \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\" (UID: \"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9\") " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.424374 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run" (OuterVolumeSpecName: "var-run") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.425264 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.426018 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts" (OuterVolumeSpecName: "scripts") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.427961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.428321 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config" (OuterVolumeSpecName: "config") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.428827 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.429580 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q" (OuterVolumeSpecName: "kube-api-access-g4q2q") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "kube-api-access-g4q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.431350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts" (OuterVolumeSpecName: "scripts") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.433418 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-1273-account-create-update-qhttp" podUID="4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.433562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.460682 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn" (OuterVolumeSpecName: "kube-api-access-dtgwn") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "kube-api-access-dtgwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.462723 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8" (OuterVolumeSpecName: "kube-api-access-8d4d8") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "kube-api-access-8d4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.466943 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" containerID="cri-o://40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" gracePeriod=604800 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.469430 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.471610 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "nova_api" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="nova_api" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.472766 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-b00e-account-create-update-pkszz" podUID="965e8a8f-b4eb-4abb-8177-841fde4d33a2" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.507510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n" (OuterVolumeSpecName: "kube-api-access-hvm4n") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). 
InnerVolumeSpecName "kube-api-access-hvm4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.507589 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526749 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526771 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526797 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526807 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526816 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4q2q\" (UniqueName: \"kubernetes.io/projected/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-kube-api-access-g4q2q\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526828 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526837 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtgwn\" (UniqueName: \"kubernetes.io/projected/dc575168-b373-41ba-9dd6-2d9d168a6527-kube-api-access-dtgwn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvm4n\" (UniqueName: \"kubernetes.io/projected/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-kube-api-access-hvm4n\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526854 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526865 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4d8\" (UniqueName: \"kubernetes.io/projected/372f7d7a-9066-4b9b-884a-5257785ed101-kube-api-access-8d4d8\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526876 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.526884 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.535233 4766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" containerID="cri-o://83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" gracePeriod=30 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.537166 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.554972 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.559922 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.572099 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.574729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.610773 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.629528 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:17 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: if [ -n "placement" ]; then Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="placement" Jan 30 16:47:17 crc kubenswrapper[4766]: else Jan 30 16:47:17 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:17 crc kubenswrapper[4766]: fi Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:17 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:17 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:17 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:17 crc kubenswrapper[4766]: # support updates Jan 30 16:47:17 crc kubenswrapper[4766]: Jan 30 16:47:17 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629881 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629902 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.629912 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.630626 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-cc14-account-create-update-6kfvc" podUID="a5ce540c-4925-43fa-b0aa-ef474912f60e" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.630834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config" (OuterVolumeSpecName: "config") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.636978 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.636793 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.642263 4766 generic.go:334] "Generic (PLEG): container finished" podID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.642340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.650849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.659849 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerID="7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.659921 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.663677 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.670389 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.671095 4766 scope.go:117] "RemoveContainer" containerID="df788f30600005e9bd630dc70c223ed28619ad8b7870fd3b9815867378945be2" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.671260 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.680072 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": dial tcp 10.217.0.195:6080: connect: connection refused" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.681364 4766 generic.go:334] "Generic (PLEG): container finished" podID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.681420 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.686973 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690456 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4c6022b-f99b-41de-8048-ac8e4c4fa68f/ovsdbserver-sb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690550 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4c6022b-f99b-41de-8048-ac8e4c4fa68f","Type":"ContainerDied","Data":"44d944c146c567ab0a586afa23a8e30b46436b5558ae7e1ed7aeb15de65469a1"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.690659 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.696071 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.709555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-pkszz" event={"ID":"965e8a8f-b4eb-4abb-8177-841fde4d33a2","Type":"ContainerStarted","Data":"07444bdec33060f75bafa2f5ef1ef7ed7a4bfb753db474b6ac639a173646884f"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.714503 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6d4bdf9c45-5nxgr" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.163:9696/\": read tcp 10.217.0.2:45420->10.217.0.163:9696: read: connection reset by peer" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.719480 4766 generic.go:334] "Generic (PLEG): container finished" podID="14ae2453-74fa-4114-9261-21b381518493" containerID="7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.719546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.732443 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.732518 4766 scope.go:117] "RemoveContainer" containerID="0e83e4f15db60d1d22bf2322b23168b3c373a79d29a5171d8b43db0aa0812d3a" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.745711 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "372f7d7a-9066-4b9b-884a-5257785ed101" (UID: "372f7d7a-9066-4b9b-884a-5257785ed101"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.746606 4766 generic.go:334] "Generic (PLEG): container finished" podID="d13e6f63-37d4-4780-9902-430a9669901c" containerID="929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.746703 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.751237 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.774958 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.775881 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776270 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/372f7d7a-9066-4b9b-884a-5257785ed101-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776523 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.776890 4766 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.783784 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.752729 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.785146 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:19.785118517 +0000 UTC m=+1494.423075863 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.784414 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.758996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-hh2cg" event={"ID":"d12bc030-c731-4999-ac6d-1be59807c6de","Type":"ContainerStarted","Data":"51966cd3a843232e24ea290a07e04942bd3fc29e3ba863dc709b3486073ad006"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.791991 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-qhttp" event={"ID":"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc","Type":"ContainerStarted","Data":"4eed2095c1e71bf557db6c6c4861ce127a35758cb81e96d8821eff98abbdbbf2"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.825749 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e751b80-d475-4bfd-a382-5d9e1618e5aa/ovsdbserver-nb/0.log" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.825884 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.826703 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e751b80-d475-4bfd-a382-5d9e1618e5aa","Type":"ContainerDied","Data":"e1760b87e9caefe6e9c0ac6d3d9d8457bd91e81888eeb4755458d5a683cbea69"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.849971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c4c6022b-f99b-41de-8048-ac8e4c4fa68f" (UID: "c4c6022b-f99b-41de-8048-ac8e4c4fa68f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.878814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" event={"ID":"dc575168-b373-41ba-9dd6-2d9d168a6527","Type":"ContainerDied","Data":"5f22f70a639fc1a3de1e29c0cbaf53974c923905b26e7700e024e4f93619bae6"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.878911 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-zcjhs" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.886330 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc575168-b373-41ba-9dd6-2d9d168a6527" (UID: "dc575168-b373-41ba-9dd6-2d9d168a6527"). InnerVolumeSpecName "ovsdbserver-nb". 
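
A pattern worth calling out: the CreateContainerConfigError and MountVolume.SetUp failures in this stretch all stem from referenced API objects that do not exist at this moment, secret "placement-db-secret" and configmap "rabbitmq-config-data" here, with "rabbitmq-cell1-config-data", "openstack-cell1-scripts", and "openstack-cell1-mariadb-root-db-secret" following below. The kubelet schedules retries per volume with exponential backoff, visible in the durationBeforeRetry field (2s here, 4s in a later entry). A quick triage sketch, using object names copied from these entries and assuming the openstack namespace:

    # List which of the ConfigMaps/Secrets named in the kubelet errors exist.
    ns=openstack
    for cm in rabbitmq-config-data rabbitmq-cell1-config-data openstack-cell1-scripts; do
      kubectl -n "$ns" get configmap "$cm" -o name 2>/dev/null || echo "missing configmap: $cm"
    done
    for sec in placement-db-secret openstack-cell1-mariadb-root-db-secret; do
      kubectl -n "$ns" get secret "$sec" -o name 2>/dev/null || echo "missing secret: $sec"
    done
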
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887496 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887789 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc575168-b373-41ba-9dd6-2d9d168a6527-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.887800 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c6022b-f99b-41de-8048-ac8e4c4fa68f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.908038 4766 generic.go:334] "Generic (PLEG): container finished" podID="22d60b44-40c9-425e-8daf-8931a25954e0" containerID="712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55" exitCode=143 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.908116 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.936546 4766 generic.go:334] "Generic (PLEG): container finished" podID="063ebe65-0175-443e-8c75-5018c42b3f36" containerID="a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.936634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.940989 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" (UID: "eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.969494 4766 generic.go:334] "Generic (PLEG): container finished" podID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" exitCode=0 Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.969583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.990076 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.990211 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: E0130 16:47:17.990272 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.990254279 +0000 UTC m=+1496.628211625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:17 crc kubenswrapper[4766]: I0130 16:47:17.992633 4766 scope.go:117] "RemoveContainer" containerID="35c50dacc5fd194e0367ec397b84d1ebda25e534558fb6144d3b0aa1f4575270" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029852 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029884 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029891 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029905 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029914 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029920 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029927 4766 generic.go:334] 
"Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029933 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029939 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029946 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029954 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029961 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029967 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.029974 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" exitCode=0 Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030077 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030085 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030095 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030115 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030138 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030159 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.030170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3"} Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.032989 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-jfd74" secret="" err="secret \"galera-openstack-cell1-dockercfg-zd2kf\" not found" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.033375 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-clmnh" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.040048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-clmnh" event={"ID":"eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9","Type":"ContainerDied","Data":"35bff03af4700c59de26d7f263ff6609c1c1e4962e327e55accdbc5ea2056c14"} Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.043430 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:18 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:18 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:18 crc kubenswrapper[4766]: else Jan 30 16:47:18 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:18 crc kubenswrapper[4766]: fi Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:18 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:18 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:18 crc kubenswrapper[4766]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:18 crc kubenswrapper[4766]: # support updates Jan 30 16:47:18 crc kubenswrapper[4766]: Jan 30 16:47:18 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.044572 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-jfd74" podUID="4e9bbf1f-b039-4112-ab71-308535065091" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.074066 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ab95d5-fb83-42b1-a38b-9e3bb8916f37" path="/var/lib/kubelet/pods/12ab95d5-fb83-42b1-a38b-9e3bb8916f37/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.074640 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" path="/var/lib/kubelet/pods/140fa04a-cb22-40ed-a08c-17f4ea13a5c4/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.075553 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199b8ae3-05c7-4785-9590-1cb06cce0013" path="/var/lib/kubelet/pods/199b8ae3-05c7-4785-9590-1cb06cce0013/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.076060 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1caad6ca-26a4-488c-8b03-90da40a955b0" path="/var/lib/kubelet/pods/1caad6ca-26a4-488c-8b03-90da40a955b0/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.076591 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372f7d7a-9066-4b9b-884a-5257785ed101" path="/var/lib/kubelet/pods/372f7d7a-9066-4b9b-884a-5257785ed101/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.077558 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8af029-8432-4152-8e74-5c40d72636d7" path="/var/lib/kubelet/pods/4c8af029-8432-4152-8e74-5c40d72636d7/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.078099 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574fc4f9-56c3-44bf-bb85-26bb97a23ddc" path="/var/lib/kubelet/pods/574fc4f9-56c3-44bf-bb85-26bb97a23ddc/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.078709 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da00370-0819-4857-8fa3-1ffe3e6b628b" path="/var/lib/kubelet/pods/6da00370-0819-4857-8fa3-1ffe3e6b628b/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.079723 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d680b3-ced9-4a2a-9a50-780e6239b4a5" path="/var/lib/kubelet/pods/81d680b3-ced9-4a2a-9a50-780e6239b4a5/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.080270 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb52775-c639-4afc-9f21-f33531a854b3" path="/var/lib/kubelet/pods/acb52775-c639-4afc-9f21-f33531a854b3/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.080767 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb40512-6ec4-4dd4-a623-ed2232387ee3" path="/var/lib/kubelet/pods/aeb40512-6ec4-4dd4-a623-ed2232387ee3/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.081800 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b88e4495-e013-4fc2-b65b-c3d914b89dd8" path="/var/lib/kubelet/pods/b88e4495-e013-4fc2-b65b-c3d914b89dd8/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.082331 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea24037-4775-49f8-8a3b-d194ea750544" path="/var/lib/kubelet/pods/cea24037-4775-49f8-8a3b-d194ea750544/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.082821 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d707ae8a-f650-48e3-87e8-dc79076433e4" path="/var/lib/kubelet/pods/d707ae8a-f650-48e3-87e8-dc79076433e4/volumes" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.153333 4766 scope.go:117] "RemoveContainer" containerID="68be686c2198473cf235baf71f611a27995c8888c56e86a3626a67b42470e28a" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.188304 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.213189 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.216911 4766 scope.go:117] "RemoveContainer" containerID="20e080fafb462224d035f80d6933976aeeea05d7d2ed407630e50efdc1f07cd7" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.220090 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.255209 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.257785 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.267247 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-clmnh"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.287791 4766 scope.go:117] "RemoveContainer" containerID="961c44998094a56223784b55dc0a705b3ed88b437f07fbb4bb63251127202310" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.296209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") pod \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.296361 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") pod \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\" (UID: \"965e8a8f-b4eb-4abb-8177-841fde4d33a2\") " Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.296912 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.296976 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.296959008 +0000 UTC m=+1494.934916354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.299642 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "965e8a8f-b4eb-4abb-8177-841fde4d33a2" (UID: "965e8a8f-b4eb-4abb-8177-841fde4d33a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.299696 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.313219 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.317014 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.320346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828" (OuterVolumeSpecName: "kube-api-access-bw828") pod "965e8a8f-b4eb-4abb-8177-841fde4d33a2" (UID: "965e8a8f-b4eb-4abb-8177-841fde4d33a2"). InnerVolumeSpecName "kube-api-access-bw828". 
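
The repeated "ExecSync cmd from runtime service failed ... cannot register an exec PID: container is stopping" errors in this stretch are exec readiness probes racing container shutdown: the probe command runs inside the container through the CRI ExecSync call, and once the runtime has begun stopping the container it refuses to start a new exec, so the probe errors instead of simply returning a nonzero exit. The probe itself, visible in the cmd field, is a pgrep process-state check; run standalone it would look like this sketch:

    # The readiness check from the cmd field, run by itself. pgrep's
    # -r/--runstates option (procps-ng) matches only processes in the listed
    # states: D (uninterruptible sleep), R (running), S (sleeping), T (stopped).
    # Exit status 0 means a live nova-scheduler process was found.
    if /usr/bin/pgrep -r DRST nova-scheduler >/dev/null; then
      echo ready
    else
      echo not-ready
    fi
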
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.337616 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.345021 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.345345 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.349259 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.358482 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-zcjhs"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.361544 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.363416 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.383432 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384439 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384563 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.384583 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384597 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.384604 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384592 4766 prober.go:104] "Probe errored" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.384612 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385337 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385403 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385417 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="init" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385468 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="init" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385500 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385508 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.385523 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.385530 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386152 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386171 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386201 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386211 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.386225 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386514 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386966 4766 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-httpd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.386986 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerName="proxy-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387002 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" containerName="ovn-controller" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387014 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="ovsdbserver-nb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387025 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387040 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" containerName="dnsmasq-dns" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387054 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387073 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" containerName="ovsdbserver-sb" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387087 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="140fa04a-cb22-40ed-a08c-17f4ea13a5c4" containerName="openstack-network-exporter" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.387800 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.399854 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.402940 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403110 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403130 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403190 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403263 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403289 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") pod \"c3997cdc-9abd-4aa3-9201-0015456d4750\" (UID: \"c3997cdc-9abd-4aa3-9201-0015456d4750\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403809 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw828\" (UniqueName: \"kubernetes.io/projected/965e8a8f-b4eb-4abb-8177-841fde4d33a2-kube-api-access-bw828\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.403821 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/965e8a8f-b4eb-4abb-8177-841fde4d33a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.405675 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.417135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.419892 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.432007 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.432465 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts" (OuterVolumeSpecName: "kube-api-access-7dsts") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "kube-api-access-7dsts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.465937 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.471492 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.502530 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.505660 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") pod \"d12bc030-c731-4999-ac6d-1be59807c6de\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.505728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") pod \"d12bc030-c731-4999-ac6d-1be59807c6de\" (UID: \"d12bc030-c731-4999-ac6d-1be59807c6de\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506110 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506226 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506343 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506355 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506366 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dsts\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-kube-api-access-7dsts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506378 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3997cdc-9abd-4aa3-9201-0015456d4750-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.506388 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3997cdc-9abd-4aa3-9201-0015456d4750-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.508105 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d12bc030-c731-4999-ac6d-1be59807c6de" (UID: "d12bc030-c731-4999-ac6d-1be59807c6de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.509861 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data" (OuterVolumeSpecName: "config-data") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.509933 4766 scope.go:117] "RemoveContainer" containerID="171794ba587c014be0b798dbd63a837f1e8d0b0b80d5e7da01caed534045c23e" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.516522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk" (OuterVolumeSpecName: "kube-api-access-rhtqk") pod "d12bc030-c731-4999-ac6d-1be59807c6de" (UID: "d12bc030-c731-4999-ac6d-1be59807c6de"). InnerVolumeSpecName "kube-api-access-rhtqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.524782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.549320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3997cdc-9abd-4aa3-9201-0015456d4750" (UID: "c3997cdc-9abd-4aa3-9201-0015456d4750"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.562698 4766 scope.go:117] "RemoveContainer" containerID="cc06e17c8227a3be8709faf659e52c8b8081ab19b313069647e67f5a0b8b13e7" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.581439 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.608850 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfxsn\" (UniqueName: \"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") pod \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609163 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") pod \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\" (UID: \"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609761 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609818 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609831 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609845 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3997cdc-9abd-4aa3-9201-0015456d4750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609857 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhtqk\" (UniqueName: \"kubernetes.io/projected/d12bc030-c731-4999-ac6d-1be59807c6de-kube-api-access-rhtqk\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.609870 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d12bc030-c731-4999-ac6d-1be59807c6de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.610435 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" (UID: "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.610577 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.617745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn" (OuterVolumeSpecName: "kube-api-access-dfxsn") pod "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" (UID: "4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc"). InnerVolumeSpecName "kube-api-access-dfxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.645721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"root-account-create-update-zlndr\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") " pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711381 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711740 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.711795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") pod \"2852c370-2b06-4a98-9d48-190ed09dc7fb\" (UID: \"2852c370-2b06-4a98-9d48-190ed09dc7fb\") " Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.714342 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.714380 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfxsn\" (UniqueName: 
\"kubernetes.io/projected/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc-kube-api-access-dfxsn\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.739942 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.746004 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.746167 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.750818 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.750981 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.751011 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.750823 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data" (OuterVolumeSpecName: "config-data") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.760385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd" (OuterVolumeSpecName: "kube-api-access-plzmd") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "kube-api-access-plzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.762505 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr" Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.768463 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:18 crc kubenswrapper[4766]: E0130 16:47:18.768520 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.819235 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzmd\" (UniqueName: \"kubernetes.io/projected/2852c370-2b06-4a98-9d48-190ed09dc7fb-kube-api-access-plzmd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.819540 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:18 crc kubenswrapper[4766]: I0130 16:47:18.820412 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.020967 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.022520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.060911 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.087364 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.102514 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.102763 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.108370 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.121403 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-66a8-account-create-update-hh2cg" event={"ID":"d12bc030-c731-4999-ac6d-1be59807c6de","Type":"ContainerDied","Data":"51966cd3a843232e24ea290a07e04942bd3fc29e3ba863dc709b3486073ad006"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.121539 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-66a8-account-create-update-hh2cg" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.125267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "2852c370-2b06-4a98-9d48-190ed09dc7fb" (UID: "2852c370-2b06-4a98-9d48-190ed09dc7fb"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.130254 4766 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.130281 4766 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2852c370-2b06-4a98-9d48-190ed09dc7fb-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.133308 4766 generic.go:334] "Generic (PLEG): container finished" podID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerID="7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.133411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162133 4766 generic.go:334] "Generic (PLEG): container finished" podID="c3997cdc-9abd-4aa3-9201-0015456d4750" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162271 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162307 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d7d659cc9-88mc9" event={"ID":"c3997cdc-9abd-4aa3-9201-0015456d4750","Type":"ContainerDied","Data":"49605357677b39efe33a4677710b6828509af2272af5c0ba35f1272ec2a825ae"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162324 4766 scope.go:117] "RemoveContainer" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.162462 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d7d659cc9-88mc9" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174228 4766 generic.go:334] "Generic (PLEG): container finished" podID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerDied","Data":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.174384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2852c370-2b06-4a98-9d48-190ed09dc7fb","Type":"ContainerDied","Data":"e3f1207851f51fa77618a8f4520c72390b14e22e1338691737d047661159f41f"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.177301 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.208821 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.213655 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-66a8-account-create-update-hh2cg"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.215393 4766 scope.go:117] "RemoveContainer" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.223905 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-6kfvc" event={"ID":"a5ce540c-4925-43fa-b0aa-ef474912f60e","Type":"ContainerStarted","Data":"4fd65f8ecd2b6f82a377e2d07f913ddeac5bcdf9496f8b1aeada1b9cd5e4251c"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.234134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.235508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.236359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") pod \"e5346df4-67e7-4a20-bb56-11173908a334\" (UID: \"e5346df4-67e7-4a20-bb56-11173908a334\") " Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.254516 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm" (OuterVolumeSpecName: "kube-api-access-wpsfm") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "kube-api-access-wpsfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.257363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1273-account-create-update-qhttp" event={"ID":"4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc","Type":"ContainerDied","Data":"4eed2095c1e71bf557db6c6c4861ce127a35758cb81e96d8821eff98abbdbbf2"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.257459 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1273-account-create-update-qhttp" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.262416 4766 scope.go:117] "RemoveContainer" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.266823 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.267210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b00e-account-create-update-pkszz" event={"ID":"965e8a8f-b4eb-4abb-8177-841fde4d33a2","Type":"ContainerDied","Data":"07444bdec33060f75bafa2f5ef1ef7ed7a4bfb753db474b6ac639a173646884f"} Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.268335 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": container with ID starting with 75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a not found: ID does not exist" containerID="75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.269048 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a"} err="failed to get container status \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": rpc error: code = NotFound desc = could not find container \"75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a\": container with ID starting with 75ae716b873f99b1ddf625fab6e52abbd22acfb11f4be4e5163116f6fbbe7e1a not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.269653 4766 scope.go:117] "RemoveContainer" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.268446 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b00e-account-create-update-pkszz" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.270456 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": container with ID starting with 068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350 not found: ID does not exist" containerID="068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.270498 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350"} err="failed to get container status \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": rpc error: code = NotFound desc = could not find container \"068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350\": container with ID starting with 068d2554e4761fa5c6a952a00e96bf35b9eae89ff6aa26d9e05f71a814e8b350 not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.270529 4766 scope.go:117] "RemoveContainer" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271666 4766 generic.go:334] "Generic (PLEG): container finished" podID="e5346df4-67e7-4a20-bb56-11173908a334" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" exitCode=0 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerDied","Data":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.271771 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5346df4-67e7-4a20-bb56-11173908a334","Type":"ContainerDied","Data":"33febc3f7d219c782652c5547871f0fec7686207e6742c6b6d2b0ff232b61a09"} Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.272059 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.273674 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-7d7d659cc9-88mc9"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.277678 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.278914 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kx5n" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" containerID="cri-o://8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" gracePeriod=2 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.317385 4766 scope.go:117] "RemoveContainer" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.317964 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data" (OuterVolumeSpecName: "config-data") pod "e5346df4-67e7-4a20-bb56-11173908a334" (UID: "e5346df4-67e7-4a20-bb56-11173908a334"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.333807 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": container with ID starting with 2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1 not found: ID does not exist" containerID="2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.333866 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1"} err="failed to get container status \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": rpc error: code = NotFound desc = could not find container \"2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1\": container with ID starting with 2671e048e8f586ac30e84753d3da2378fb3c49f9c4e105e7b943738d2cc3d2c1 not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.333899 4766 scope.go:117] "RemoveContainer" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344511 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344553 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsfm\" (UniqueName: \"kubernetes.io/projected/e5346df4-67e7-4a20-bb56-11173908a334-kube-api-access-wpsfm\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.344566 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5346df4-67e7-4a20-bb56-11173908a334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.399955 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.417040 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.419015 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.420782 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1273-account-create-update-qhttp"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.421152 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.424874 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.429554 4766 scope.go:117] "RemoveContainer" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.430961 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": container with ID starting with f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf not found: ID does not exist" containerID="f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.431003 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf"} err="failed to get container status \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": rpc error: code = NotFound desc = could not find container \"f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf\": container with ID starting with f89cf49e9f838960fe8746366d6f0b6e8a301a5e9f763e18897855dbba68d6bf not found: ID does not exist" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.460171 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.476930 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b00e-account-create-update-pkszz"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.498798 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.505529 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.619929 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:19 
crc kubenswrapper[4766]: I0130 16:47:19.628753 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.739490 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.739782 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" containerID="cri-o://1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740507 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" containerID="cri-o://3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740816 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" containerID="cri-o://858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.740864 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" containerID="cri-o://69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.773376 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.802684 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.802915 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" containerID="cri-o://b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.873932 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 16:47:19 crc kubenswrapper[4766]: E0130 16:47:19.873999 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.873984701 +0000 UTC m=+1498.511942047 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.938085 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.938359 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" containerID="cri-o://7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f" gracePeriod=30 Jan 30 16:47:19 crc kubenswrapper[4766]: I0130 16:47:19.968869 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.000261 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e3be-account-create-update-n7qg6"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.016686 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016702 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.016732 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016738 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016885 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.016902 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5346df4-67e7-4a20-bb56-11173908a334" containerName="nova-cell0-conductor-conductor" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.017588 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.020907 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.038589 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.080743 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.080865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.103381 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e751b80-d475-4bfd-a382-5d9e1618e5aa" path="/var/lib/kubelet/pods/1e751b80-d475-4bfd-a382-5d9e1618e5aa/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.104568 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2852c370-2b06-4a98-9d48-190ed09dc7fb" path="/var/lib/kubelet/pods/2852c370-2b06-4a98-9d48-190ed09dc7fb/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.105550 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb40e54-43ed-4dd6-8c23-138c01cf062d" path="/var/lib/kubelet/pods/3fb40e54-43ed-4dd6-8c23-138c01cf062d/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.116971 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc" path="/var/lib/kubelet/pods/4f9c7bf1-ef4e-4b2b-806a-2da4b6a26cbc/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.124359 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965e8a8f-b4eb-4abb-8177-841fde4d33a2" path="/var/lib/kubelet/pods/965e8a8f-b4eb-4abb-8177-841fde4d33a2/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.125544 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3997cdc-9abd-4aa3-9201-0015456d4750" path="/var/lib/kubelet/pods/c3997cdc-9abd-4aa3-9201-0015456d4750/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.126612 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c6022b-f99b-41de-8048-ac8e4c4fa68f" path="/var/lib/kubelet/pods/c4c6022b-f99b-41de-8048-ac8e4c4fa68f/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.128079 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12bc030-c731-4999-ac6d-1be59807c6de" path="/var/lib/kubelet/pods/d12bc030-c731-4999-ac6d-1be59807c6de/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.128520 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc575168-b373-41ba-9dd6-2d9d168a6527" 
path="/var/lib/kubelet/pods/dc575168-b373-41ba-9dd6-2d9d168a6527/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.129264 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5346df4-67e7-4a20-bb56-11173908a334" path="/var/lib/kubelet/pods/e5346df4-67e7-4a20-bb56-11173908a334/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.129844 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9" path="/var/lib/kubelet/pods/eb09ddae-1f3f-4b99-8cd2-7a6beb860bf9/volumes" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149552 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149659 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8p4hm"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149731 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.149986 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7bc6f65df6-mx4xk" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" containerID="cri-o://7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" gracePeriod=30 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.165449 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.166272 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.186413 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.192873 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.192966 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.692941522 +0000 UTC m=+1495.330898868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.193823 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.209910 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:57124->10.217.0.203:8775: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.210040 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:57134->10.217.0.203:8775: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.227218 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.227321 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:20.727290502 +0000 UTC m=+1495.365247848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.299442 4766 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.299520 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts podName:4e9bbf1f-b039-4112-ab71-308535065091 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:24.29950201 +0000 UTC m=+1498.937459356 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts") pod "root-account-create-update-jfd74" (UID: "4e9bbf1f-b039-4112-ab71-308535065091") : configmap "openstack-cell1-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.300650 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2jkw8"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.325069 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.332537 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338635 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338859 4766 generic.go:334] "Generic (PLEG): container finished" podID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerID="e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.338891 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae"} Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.348261 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:47:20 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: if [ -n "" ]; then
Jan 30 16:47:20 crc kubenswrapper[4766]: GRANT_DATABASE=""
Jan 30 16:47:20 crc kubenswrapper[4766]: else
Jan 30 16:47:20 crc kubenswrapper[4766]: GRANT_DATABASE="*"
Jan 30 16:47:20 crc kubenswrapper[4766]: fi
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: # going for maximum compatibility here:
Jan 30 16:47:20 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 30 16:47:20 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 30 16:47:20 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 30 16:47:20 crc kubenswrapper[4766]: # support updates
Jan 30 16:47:20 crc kubenswrapper[4766]:
Jan 30 16:47:20 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError"
Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.354623 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zlndr" podUID="768238f5-b74e-4f23-91ec-4eeb69375025" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.360913 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:49948->10.217.0.156:9311: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.360929 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b946b75c8-zb6q6" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:49952->10.217.0.156:9311: read: connection reset by peer" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364286 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364810 4766 generic.go:334] "Generic (PLEG): container finished" podID="bb576787-90a5-4e81-a047-6fcf37921335" containerID="b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" exitCode=2 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.364982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerDied","Data":"b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.372007 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375079 4766 generic.go:334] "Generic (PLEG): container finished" podID="845c3343-246e-4309-bd46-9bcd92cad574" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375140 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375164 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kx5n" event={"ID":"845c3343-246e-4309-bd46-9bcd92cad574","Type":"ContainerDied","Data":"721b24966425ad3828c4ed010c44283d43a0eeb0f5dae60a2287376c39e4728d"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375240 4766 scope.go:117] "RemoveContainer" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375305 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.375434 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.375968 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jn9rq operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e3be-account-create-update-qnsph" podUID="34adc844-a813-4bb0-9d46-131d1b5a7b9b" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.398816 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qdgxb"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401578 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401639 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401717 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") pod \"a5ce540c-4925-43fa-b0aa-ef474912f60e\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401746 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") pod \"4e9bbf1f-b039-4112-ab71-308535065091\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401789 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401821 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401839 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") pod \"4e9bbf1f-b039-4112-ab71-308535065091\" (UID: \"4e9bbf1f-b039-4112-ab71-308535065091\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401867 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401924 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") pod \"a5ce540c-4925-43fa-b0aa-ef474912f60e\" (UID: \"a5ce540c-4925-43fa-b0aa-ef474912f60e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401958 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.401976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") pod \"845c3343-246e-4309-bd46-9bcd92cad574\" (UID: \"845c3343-246e-4309-bd46-9bcd92cad574\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.402000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") pod \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\" (UID: \"447a8ec3-4e50-40a9-b418-01fd8c0eb03e\") " Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.414590 4766 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs" (OuterVolumeSpecName: "logs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.415428 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities" (OuterVolumeSpecName: "utilities") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.417508 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.417929 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e9bbf1f-b039-4112-ab71-308535065091" (UID: "4e9bbf1f-b039-4112-ab71-308535065091"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.421108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5ce540c-4925-43fa-b0aa-ef474912f60e" (UID: "a5ce540c-4925-43fa-b0aa-ef474912f60e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.423705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8" (OuterVolumeSpecName: "kube-api-access-k9sg8") pod "a5ce540c-4925-43fa-b0aa-ef474912f60e" (UID: "a5ce540c-4925-43fa-b0aa-ef474912f60e"). InnerVolumeSpecName "kube-api-access-k9sg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426486 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426520 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4" exitCode=2 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.426675 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.432700 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2" (OuterVolumeSpecName: "kube-api-access-s4dw2") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "kube-api-access-s4dw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.437776 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf" (OuterVolumeSpecName: "kube-api-access-nkrsf") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "kube-api-access-nkrsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443371 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts" (OuterVolumeSpecName: "scripts") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443538 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jfd74" event={"ID":"4e9bbf1f-b039-4112-ab71-308535065091","Type":"ContainerDied","Data":"fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.443648 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jfd74" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.446941 4766 scope.go:117] "RemoveContainer" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.447018 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.465157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z" (OuterVolumeSpecName: "kube-api-access-nn85z") pod "4e9bbf1f-b039-4112-ab71-308535065091" (UID: "4e9bbf1f-b039-4112-ab71-308535065091"). InnerVolumeSpecName "kube-api-access-nn85z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.466499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cc14-account-create-update-6kfvc" event={"ID":"a5ce540c-4925-43fa-b0aa-ef474912f60e","Type":"ContainerDied","Data":"4fd65f8ecd2b6f82a377e2d07f913ddeac5bcdf9496f8b1aeada1b9cd5e4251c"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.467151 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cc14-account-create-update-6kfvc" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506589 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ce540c-4925-43fa-b0aa-ef474912f60e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506625 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506636 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506645 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrsf\" (UniqueName: \"kubernetes.io/projected/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-kube-api-access-nkrsf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506656 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506664 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dw2\" (UniqueName: \"kubernetes.io/projected/845c3343-246e-4309-bd46-9bcd92cad574-kube-api-access-s4dw2\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506672 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9sg8\" (UniqueName: \"kubernetes.io/projected/a5ce540c-4925-43fa-b0aa-ef474912f60e-kube-api-access-k9sg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506680 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e9bbf1f-b039-4112-ab71-308535065091-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.506689 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn85z\" (UniqueName: \"kubernetes.io/projected/4e9bbf1f-b039-4112-ab71-308535065091-kube-api-access-nn85z\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.573032 4766 scope.go:117] "RemoveContainer" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.579524 4766 generic.go:334] "Generic (PLEG): container finished" podID="14ae2453-74fa-4114-9261-21b381518493" containerID="078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.579624 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.625831 4766 generic.go:334] "Generic (PLEG): container finished" podID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerID="83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.625928 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.634459 4766 generic.go:334] "Generic (PLEG): container finished" podID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerID="7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" exitCode=0 Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.634516 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab"} Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.642291 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.646774 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.658520 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cc14-account-create-update-6kfvc"] Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.712416 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.712642 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.712777 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.712867 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.712816927 +0000 UTC m=+1496.350774283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.715988 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "845c3343-246e-4309-bd46-9bcd92cad574" (UID: "845c3343-246e-4309-bd46-9bcd92cad574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.735277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.774110 4766 scope.go:117] "RemoveContainer" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.774328 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data" (OuterVolumeSpecName: "config-data") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.775578 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": container with ID starting with 8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7 not found: ID does not exist" containerID="8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.775624 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7"} err="failed to get container status \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": rpc error: code = NotFound desc = could not find container \"8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7\": container with ID starting with 8cbf9b54e98821990af4d26a0270e1083e6e97cddbcd3ce5671c84518d5767d7 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.775652 4766 scope.go:117] "RemoveContainer" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.777243 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": container with ID starting with 07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921 not found: ID does not exist" containerID="07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.777275 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921"} err="failed to get container status \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": rpc error: code = NotFound desc = could not find container \"07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921\": container with ID starting with 07c35af9b71ba10727d5e6edfec8a9dd18621078f9906a2ff5f1d606d8fff921 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.777296 4766 scope.go:117] "RemoveContainer" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.789642 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" containerID="cri-o://aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" gracePeriod=30 Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.789721 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": container with ID starting with 327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14 not found: ID does not exist" containerID="327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.789757 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14"} err="failed to get container status \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": rpc error: code = NotFound desc = could not find container \"327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14\": container with ID starting with 327eb2c72821c900abb686fc57155daefdfa60f38b6a17aeb8f64c6e06c87f14 not found: ID does not exist" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814438 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814454 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845c3343-246e-4309-bd46-9bcd92cad574-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.814466 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.824358 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: E0130 16:47:20.824440 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:21.824416306 +0000 UTC m=+1496.462373652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.905404 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": dial tcp 10.217.0.168:8776: connect: connection refused" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.958584 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:47:20 crc kubenswrapper[4766]: I0130 16:47:20.997588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.016240 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jfd74"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.076863 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "447a8ec3-4e50-40a9-b418-01fd8c0eb03e" (UID: "447a8ec3-4e50-40a9-b418-01fd8c0eb03e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125450 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125562 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125591 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.125976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126035 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126055 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") pod \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\" (UID: \"9ad68dc2-23ff-4044-b74d-149ae8f02bc0\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.126534 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/447a8ec3-4e50-40a9-b418-01fd8c0eb03e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.131327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.132432 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.132887 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.133247 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.144419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz" (OuterVolumeSpecName: "kube-api-access-q47vz") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "kube-api-access-q47vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.148239 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.166572 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e9bbf1f_b039_4112_ab71_308535065091.slice/crio-fca4c05dceea3855589628ff1ebfa551584aedf44b196076f8197c1c533ffe64\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-conmon-812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d5b8a42_39dd_4b1b_9f92_1e3585b6707b.slice/crio-conmon-a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d60b44_40c9_425e_8daf_8931a25954e0.slice/crio-812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.204873 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.227969 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.227993 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228004 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228012 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228021 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228030 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47vz\" (UniqueName: \"kubernetes.io/projected/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-kube-api-access-q47vz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.228038 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.254249 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.265819 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9ad68dc2-23ff-4044-b74d-149ae8f02bc0" (UID: "9ad68dc2-23ff-4044-b74d-149ae8f02bc0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.330859 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.330926 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad68dc2-23ff-4044-b74d-149ae8f02bc0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.415606 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.425834 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.441490 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.487455 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.487556 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.489827 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.491387 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.491433 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541907 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541958 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.541996 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542017 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542103 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542152 4766 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542198 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542236 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542499 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542567 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542590 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") pod \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\" (UID: \"4bc2931b-8439-4c5c-be4d-43f4aab528f2\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") pod \"bb576787-90a5-4e81-a047-6fcf37921335\" (UID: \"bb576787-90a5-4e81-a047-6fcf37921335\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.542639 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.546006 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.549517 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs" (OuterVolumeSpecName: "logs") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.549694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb" (OuterVolumeSpecName: "kube-api-access-r4btb") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "kube-api-access-r4btb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.561422 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.569942 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.575254 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts" (OuterVolumeSpecName: "scripts") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.585918 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48" (OuterVolumeSpecName: "kube-api-access-s2f48") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-api-access-s2f48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.609902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.623716 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.627146 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.632728 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.632805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data" (OuterVolumeSpecName: "config-data") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.638808 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643277 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643675 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643817 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643844 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.643948 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.644008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") pod \"40f1dc52-213f-4a5b-af33-4067a83859e4\" (UID: \"40f1dc52-213f-4a5b-af33-4067a83859e4\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.644054 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") pod \"14ae2453-74fa-4114-9261-21b381518493\" (UID: \"14ae2453-74fa-4114-9261-21b381518493\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646061 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646102 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2f48\" (UniqueName: \"kubernetes.io/projected/bb576787-90a5-4e81-a047-6fcf37921335-kube-api-access-s2f48\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646115 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bc2931b-8439-4c5c-be4d-43f4aab528f2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646127 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4btb\" (UniqueName: \"kubernetes.io/projected/4bc2931b-8439-4c5c-be4d-43f4aab528f2-kube-api-access-r4btb\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646152 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646164 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646191 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646204 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646218 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646233 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.646243 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.648792 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs" (OuterVolumeSpecName: "logs") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.654408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.654578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5" (OuterVolumeSpecName: "kube-api-access-sxzz5") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "kube-api-access-sxzz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.661208 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "bb576787-90a5-4e81-a047-6fcf37921335" (UID: "bb576787-90a5-4e81-a047-6fcf37921335"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662587 4766 generic.go:334] "Generic (PLEG): container finished" podID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"aca8dfc0-f915-4696-95c1-3c232f2ea35a","Type":"ContainerDied","Data":"7e89f84a27af28de0ff96a206ea024d02e0721f6cc45b38d9fef889091b6e08b"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662715 4766 scope.go:117] "RemoveContainer" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.662858 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.678834 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs" (OuterVolumeSpecName: "logs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.696291 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv" (OuterVolumeSpecName: "kube-api-access-xqjcv") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "kube-api-access-xqjcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.702628 4766 generic.go:334] "Generic (PLEG): container finished" podID="063ebe65-0175-443e-8c75-5018c42b3f36" containerID="e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.702708 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.708500 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.711207 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb576787-90a5-4e81-a047-6fcf37921335","Type":"ContainerDied","Data":"004a4dbb8938c5e8f1cfef5ca99ba208dc91ea1d26f1a6bd59dd513328e8e0c0"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.711323 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.714803 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kx5n" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.718501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data" (OuterVolumeSpecName: "config-data") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.720518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bc2931b-8439-4c5c-be4d-43f4aab528f2","Type":"ContainerDied","Data":"2797b67ea13c41adaa6a8bb781fc530c7226e6d8ca440692aa04b6d42362f33b"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.720640 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.722418 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.728366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.729794 4766 generic.go:334] "Generic (PLEG): container finished" podID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerID="7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.729890 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerDied","Data":"7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.738596 4766 generic.go:334] "Generic (PLEG): container finished" podID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.741513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.742320 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b","Type":"ContainerDied","Data":"a9a6840755fd2b986bdb4ab361591ae6bb5de2cf1574ac6d83650a445bab4f37"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.742479 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.746796 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747357 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747618 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.747905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748427 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748536 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.748726 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749064 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749205 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749316 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750012 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750124 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750736 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750839 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751321 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751443 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") pod \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\" (UID: \"6d5b8a42-39dd-4b1b-9f92-1e3585b6707b\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") pod \"17d6e828-fc05-46cb-9bee-bac08ebf331a\" (UID: \"17d6e828-fc05-46cb-9bee-bac08ebf331a\") " Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.751886 4766 scope.go:117] "RemoveContainer" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.752237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14ae2453-74fa-4114-9261-21b381518493","Type":"ContainerDied","Data":"7fc6fabdf1696e6682c7bbb5d9becc2f8e5aa3ed317845b65b7dc17fdb970244"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 
16:47:21.748987 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs" (OuterVolumeSpecName: "logs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749092 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs" (OuterVolumeSpecName: "logs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.749136 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.750391 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs" (OuterVolumeSpecName: "logs") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.753574 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.753751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.757088 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.757153 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.757131547 +0000 UTC m=+1498.395088963 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766144 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766451 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766471 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzz5\" (UniqueName: \"kubernetes.io/projected/40f1dc52-213f-4a5b-af33-4067a83859e4-kube-api-access-sxzz5\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766488 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766527 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40f1dc52-213f-4a5b-af33-4067a83859e4-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766541 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766554 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ae2453-74fa-4114-9261-21b381518493-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766564 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca8dfc0-f915-4696-95c1-3c232f2ea35a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766602 4766 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb576787-90a5-4e81-a047-6fcf37921335-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766770 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766788 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjcv\" (UniqueName: \"kubernetes.io/projected/14ae2453-74fa-4114-9261-21b381518493-kube-api-access-xqjcv\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766871 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17d6e828-fc05-46cb-9bee-bac08ebf331a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766885 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766899 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aca8dfc0-f915-4696-95c1-3c232f2ea35a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766943 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.766959 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.771695 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b" (OuterVolumeSpecName: "kube-api-access-2xl7b") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "kube-api-access-2xl7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772160 4766 generic.go:334] "Generic (PLEG): container finished" podID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772302 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"40f1dc52-213f-4a5b-af33-4067a83859e4","Type":"ContainerDied","Data":"fbc4233875c212f4b897d1f9917772ed396cd3598ca0ca808134dccd327aa2de"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.772369 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.775486 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.777150 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781010 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781202 4766 generic.go:334] "Generic (PLEG): container finished" podID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1" exitCode=139 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.781354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.800765 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801532 4766 generic.go:334] "Generic (PLEG): container finished" podID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerID="1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801831 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.801964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.802068 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804201 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zlndr" event={"ID":"768238f5-b74e-4f23-91ec-4eeb69375025","Type":"ContainerStarted","Data":"51ffbc2026ffaf4c9f26fd55d50669f8d3b947029fdc717ba29a5acfdc7e97bf"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804630 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts" (OuterVolumeSpecName: "scripts") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.804862 4766 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-zlndr" secret="" err="secret \"galera-openstack-dockercfg-x2qq7\" not found" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.806664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t" (OuterVolumeSpecName: "kube-api-access-69h5t") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "kube-api-access-69h5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.806968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b" (OuterVolumeSpecName: "kube-api-access-dct4b") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "kube-api-access-dct4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.807117 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.814970 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "40f1dc52-213f-4a5b-af33-4067a83859e4" (UID: "40f1dc52-213f-4a5b-af33-4067a83859e4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.815228 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.817845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.817851 4766 generic.go:334] "Generic (PLEG): container finished" podID="d13e6f63-37d4-4780-9902-430a9669901c" containerID="e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.820382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts" (OuterVolumeSpecName: "scripts") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.821834 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 16:47:21 crc kubenswrapper[4766]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: if [ -n "" ]; then Jan 30 16:47:21 crc kubenswrapper[4766]: GRANT_DATABASE="" Jan 30 16:47:21 crc kubenswrapper[4766]: else Jan 30 16:47:21 crc kubenswrapper[4766]: GRANT_DATABASE="*" Jan 30 16:47:21 crc kubenswrapper[4766]: fi Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: # going for maximum compatibility here: Jan 30 16:47:21 crc kubenswrapper[4766]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 16:47:21 crc kubenswrapper[4766]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 16:47:21 crc kubenswrapper[4766]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 16:47:21 crc kubenswrapper[4766]: # support updates Jan 30 16:47:21 crc kubenswrapper[4766]: Jan 30 16:47:21 crc kubenswrapper[4766]: $MYSQL_CMD < logger="UnhandledError" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.822971 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zlndr" podUID="768238f5-b74e-4f23-91ec-4eeb69375025" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.823626 4766 generic.go:334] "Generic (PLEG): container finished" podID="22d60b44-40c9-425e-8daf-8931a25954e0" containerID="812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.823672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.824307 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.826393 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d8797fb6-zzsfd" event={"ID":"447a8ec3-4e50-40a9-b418-01fd8c0eb03e","Type":"ContainerDied","Data":"e94bea3a22075449c7ce733d15ed50c31bf49ec686272c0a7961479d9194b9c6"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.826626 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d8797fb6-zzsfd" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.830074 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kx5n"] Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.835350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9ad68dc2-23ff-4044-b74d-149ae8f02bc0","Type":"ContainerDied","Data":"86807e61b818028e1b27b632e251a892f0f024f763279e3a716bc66141f0adc3"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.836066 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841613 4766 generic.go:334] "Generic (PLEG): container finished" podID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" exitCode=0 Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.841695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b946b75c8-zb6q6" event={"ID":"17d6e828-fc05-46cb-9bee-bac08ebf331a","Type":"ContainerDied","Data":"d7ba5e3a0e26b335d6f1850d527c93eb68d9d4d8bfecdec3674d222763957cd0"} Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.850057 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.850269 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b946b75c8-zb6q6" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.858424 4766 scope.go:117] "RemoveContainer" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.860892 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": container with ID starting with f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668 not found: ID does not exist" containerID="f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.861227 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668"} err="failed to get container status \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": rpc error: code = NotFound desc = could not find container \"f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668\": container with ID starting with f8cd42c17a2b9677c49633243f1573eb74a7d64e3fcc5f7dba33bde53ccef668 not found: ID does not exist" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.861309 4766 scope.go:117] "RemoveContainer" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.862215 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": container with ID starting with a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832 not found: ID does not exist" containerID="a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.863128 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832"} err="failed to get container status \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": rpc error: code = NotFound desc = could not find container \"a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832\": container with ID starting with a733f4a35556a7749ab7a8a117d06d26d21581815da5c3972ae1bb6be001e832 not found: ID does not exist" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.863204 4766 scope.go:117] "RemoveContainer" containerID="b169f04387ed060fbbaaafe5ea96dd7518c3bc7deab7064d883b932c7d250d26" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869042 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") pod \"keystone-e3be-account-create-update-qnsph\" (UID: \"34adc844-a813-4bb0-9d46-131d1b5a7b9b\") " pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869303 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xl7b\" (UniqueName: \"kubernetes.io/projected/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-kube-api-access-2xl7b\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869320 4766 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869360 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869370 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69h5t\" (UniqueName: \"kubernetes.io/projected/aca8dfc0-f915-4696-95c1-3c232f2ea35a-kube-api-access-69h5t\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869381 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869390 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dct4b\" (UniqueName: \"kubernetes.io/projected/17d6e828-fc05-46cb-9bee-bac08ebf331a-kube-api-access-dct4b\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869398 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869406 4766 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/40f1dc52-213f-4a5b-af33-4067a83859e4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.869416 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.869527 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:22.369489257 +0000 UTC m=+1497.007446603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.869448 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.879528 4766 projected.go:194] Error preparing data for projected volume kube-api-access-jn9rq for pod openstack/keystone-e3be-account-create-update-qnsph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:21 crc kubenswrapper[4766]: E0130 16:47:21.879675 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq podName:34adc844-a813-4bb0-9d46-131d1b5a7b9b nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.879652151 +0000 UTC m=+1498.517609497 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn9rq" (UniqueName: "kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq") pod "keystone-e3be-account-create-update-qnsph" (UID: "34adc844-a813-4bb0-9d46-131d1b5a7b9b") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.881397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data" (OuterVolumeSpecName: "config-data") pod "4bc2931b-8439-4c5c-be4d-43f4aab528f2" (UID: "4bc2931b-8439-4c5c-be4d-43f4aab528f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.908357 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "14ae2453-74fa-4114-9261-21b381518493" (UID: "14ae2453-74fa-4114-9261-21b381518493"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.917686 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.955559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.973645 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974441 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974531 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ae2453-74fa-4114-9261-21b381518493-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:21 crc kubenswrapper[4766]: I0130 16:47:21.974645 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc2931b-8439-4c5c-be4d-43f4aab528f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.057943 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.063623 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbf5802-dfa7-4b32-aaa5-48fc779da5d6" path="/var/lib/kubelet/pods/0dbf5802-dfa7-4b32-aaa5-48fc779da5d6/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.064220 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data" (OuterVolumeSpecName: "config-data") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.065857 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9bbf1f-b039-4112-ab71-308535065091" path="/var/lib/kubelet/pods/4e9bbf1f-b039-4112-ab71-308535065091/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.066585 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59eff57d-cb92-4c52-aad2-6e43b3908fd4" path="/var/lib/kubelet/pods/59eff57d-cb92-4c52-aad2-6e43b3908fd4/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.068346 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845c3343-246e-4309-bd46-9bcd92cad574" path="/var/lib/kubelet/pods/845c3343-246e-4309-bd46-9bcd92cad574/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.069662 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ce540c-4925-43fa-b0aa-ef474912f60e" path="/var/lib/kubelet/pods/a5ce540c-4925-43fa-b0aa-ef474912f60e/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.072724 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b242f466-9049-49a9-b655-b270790de9ce" path="/var/lib/kubelet/pods/b242f466-9049-49a9-b655-b270790de9ce/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.075241 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb576787-90a5-4e81-a047-6fcf37921335" path="/var/lib/kubelet/pods/bb576787-90a5-4e81-a047-6fcf37921335/volumes" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.090556 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.090746 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.090955 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.091062 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data podName:b21357e1-82c9-419a-a191-359c84d6d001 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:30.091037097 +0000 UTC m=+1504.728994493 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data") pod "rabbitmq-cell1-server-0" (UID: "b21357e1-82c9-419a-a191-359c84d6d001") : configmap "rabbitmq-cell1-config-data" not found Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.134520 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.154385 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.165830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data" (OuterVolumeSpecName: "config-data") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.169562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.169796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "17d6e828-fc05-46cb-9bee-bac08ebf331a" (UID: "17d6e828-fc05-46cb-9bee-bac08ebf331a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.171584 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data" (OuterVolumeSpecName: "config-data") pod "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" (UID: "6d5b8a42-39dd-4b1b-9f92-1e3585b6707b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.174453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.191966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.192881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") pod \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\" (UID: \"aca8dfc0-f915-4696-95c1-3c232f2ea35a\") " Jan 30 16:47:22 crc kubenswrapper[4766]: W0130 16:47:22.193148 4766 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aca8dfc0-f915-4696-95c1-3c232f2ea35a/volumes/kubernetes.io~secret/public-tls-certs Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aca8dfc0-f915-4696-95c1-3c232f2ea35a" (UID: "aca8dfc0-f915-4696-95c1-3c232f2ea35a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193630 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193717 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193798 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193909 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.193987 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17d6e828-fc05-46cb-9bee-bac08ebf331a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194051 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194129 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca8dfc0-f915-4696-95c1-3c232f2ea35a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.194264 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.282424 4766 scope.go:117] "RemoveContainer" containerID="7cb223d43c8f7f218cb3801a506f0b8a1c37370133be56bce90a766f5556e3ab" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.317198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.321372 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.344137 4766 scope.go:117] "RemoveContainer" containerID="7a019f6cf432acd6921c269ed116db1aa5dfd42bb062f9567ee28226592d75f9" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.346912 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.359955 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.372848 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.374813 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.377804 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.383488 4766 scope.go:117] "RemoveContainer" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.390877 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399078 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399140 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399387 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399660 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: 
\"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399692 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399716 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399795 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") pod \"61f7793d-39bd-4e96-a857-7de972f0c76d\" (UID: \"61f7793d-39bd-4e96-a857-7de972f0c76d\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399815 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399896 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") pod \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\" (UID: \"908c7fd8-c07e-463e-94c4-76980a3a8ba2\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.399951 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") pod \"22d60b44-40c9-425e-8daf-8931a25954e0\" (UID: \"22d60b44-40c9-425e-8daf-8931a25954e0\") " Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.400456 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.400519 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:23.400498684 +0000 UTC m=+1498.038456030 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.401739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.403284 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.407856 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.408503 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data" (OuterVolumeSpecName: "config-data") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.409611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs" (OuterVolumeSpecName: "logs") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.428007 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts" (OuterVolumeSpecName: "scripts") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz" (OuterVolumeSpecName: "kube-api-access-h9fwz") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "kube-api-access-h9fwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx" (OuterVolumeSpecName: "kube-api-access-mbnzx") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "kube-api-access-mbnzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430705 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc" (OuterVolumeSpecName: "kube-api-access-cflcc") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "kube-api-access-cflcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.430771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.454800 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.458895 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.459631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.469065 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.485507 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data" (OuterVolumeSpecName: "config-data") pod "22d60b44-40c9-425e-8daf-8931a25954e0" (UID: "22d60b44-40c9-425e-8daf-8931a25954e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.493339 4766 scope.go:117] "RemoveContainer" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501897 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501917 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501944 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.501963 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502014 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502114 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") pod \"d13e6f63-37d4-4780-9902-430a9669901c\" (UID: \"d13e6f63-37d4-4780-9902-430a9669901c\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502183 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502211 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") pod \"063ebe65-0175-443e-8c75-5018c42b3f36\" (UID: \"063ebe65-0175-443e-8c75-5018c42b3f36\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.502418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503852 4766 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503896 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/063ebe65-0175-443e-8c75-5018c42b3f36-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503914 4766 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503929 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fwz\" (UniqueName: \"kubernetes.io/projected/22d60b44-40c9-425e-8daf-8931a25954e0-kube-api-access-h9fwz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503947 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d60b44-40c9-425e-8daf-8931a25954e0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503960 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503972 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbnzx\" (UniqueName: \"kubernetes.io/projected/61f7793d-39bd-4e96-a857-7de972f0c76d-kube-api-access-mbnzx\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.503992 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504009 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cflcc\" (UniqueName: \"kubernetes.io/projected/908c7fd8-c07e-463e-94c4-76980a3a8ba2-kube-api-access-cflcc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504024 4766 reconciler_common.go:293] "Volume detached for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504038 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504056 4766 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/908c7fd8-c07e-463e-94c4-76980a3a8ba2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504069 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504082 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d60b44-40c9-425e-8daf-8931a25954e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.504094 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/61f7793d-39bd-4e96-a857-7de972f0c76d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.517124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs" (OuterVolumeSpecName: "logs") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.517751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts" (OuterVolumeSpecName: "scripts") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518028 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518657 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.518811 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.519737 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r" (OuterVolumeSpecName: "kube-api-access-26q8r") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "kube-api-access-26q8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.520919 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.522233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx" (OuterVolumeSpecName: "kube-api-access-rbwrx") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "kube-api-access-rbwrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.535723 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.536017 4766 scope.go:117] "RemoveContainer" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.550024 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.551570 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": container with ID starting with a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425 not found: ID does not exist" containerID="a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.551621 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425"} err="failed to get container status \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": rpc error: code = NotFound desc = could not find container \"a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425\": container with ID starting with a67df9afdf256b25cf9bda3ba94d84b8fbd05c6c568cae7f599293e1949d5425 not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.551653 4766 scope.go:117] "RemoveContainer" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.562141 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": container with ID starting with ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc not found: ID does not exist" containerID="ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.562220 4766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc"} err="failed to get container status \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": rpc error: code = NotFound desc = could not find container \"ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc\": container with ID starting with ae7dde53148ed0648831b0890e97ecd78fb3fcfcd4a2cb421fec3c285c7bcafc not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.562252 4766 scope.go:117] "RemoveContainer" containerID="078a351f4bbfda381f7eaea97874a2d3cad8f7b02bef769bcb410ba868b12250" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.570882 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.572505 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.578407 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.580542 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.594642 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.597956 4766 scope.go:117] "RemoveContainer" containerID="7cabed8561645b99877a1c2df47b93e7663d97c477d7b28bd91f347a72034772" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606637 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606729 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2t7q\" (UniqueName: 
\"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606875 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.606906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.607025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") pod \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\" (UID: \"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22\") " Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.613298 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts" (OuterVolumeSpecName: "scripts") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616708 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616742 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616758 4766 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616771 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d13e6f63-37d4-4780-9902-430a9669901c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616782 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616796 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbwrx\" (UniqueName: \"kubernetes.io/projected/d13e6f63-37d4-4780-9902-430a9669901c-kube-api-access-rbwrx\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616808 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc 
kubenswrapper[4766]: I0130 16:47:22.616820 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26q8r\" (UniqueName: \"kubernetes.io/projected/063ebe65-0175-443e-8c75-5018c42b3f36-kube-api-access-26q8r\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.616833 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.620547 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.622134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config" (OuterVolumeSpecName: "config") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.651372 4766 scope.go:117] "RemoveContainer" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.657299 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.659133 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q" (OuterVolumeSpecName: "kube-api-access-p2t7q") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "kube-api-access-p2t7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.672826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.690364 4766 scope.go:117] "RemoveContainer" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.701943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.716810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.717963 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.717991 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718004 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718019 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2t7q\" (UniqueName: \"kubernetes.io/projected/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-kube-api-access-p2t7q\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718034 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.718045 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.719378 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "61f7793d-39bd-4e96-a857-7de972f0c76d" (UID: "61f7793d-39bd-4e96-a857-7de972f0c76d"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.725366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data" (OuterVolumeSpecName: "config-data") pod "908c7fd8-c07e-463e-94c4-76980a3a8ba2" (UID: "908c7fd8-c07e-463e-94c4-76980a3a8ba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.726324 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69d8797fb6-zzsfd"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.729530 4766 scope.go:117] "RemoveContainer" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.730274 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": container with ID starting with f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d not found: ID does not exist" containerID="f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.730309 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d"} err="failed to get container status \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": rpc error: code = NotFound desc = could not find container \"f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d\": container with ID starting with f5c26f650562d5378425a19e3a733be4cbcd37edac52306372d412afb45a409d not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.730345 4766 scope.go:117] "RemoveContainer" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: E0130 16:47:22.732525 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": container with ID starting with e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc not found: ID does not exist" containerID="e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.732557 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc"} err="failed to get container status \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": rpc error: code = NotFound desc = could not find container \"e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc\": container with ID starting with e3f046bd0100af048e190e7bb7dc2c7ede76a6587bf6a06b51be084ec53ea1dc not found: ID does not exist" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.732578 4766 scope.go:117] "RemoveContainer" containerID="e1c9c044f33b3da34602b78fc59451988ca7b3d5b492d71105b99eb5384541ae" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.757943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.765810 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.768128 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data" (OuterVolumeSpecName: "config-data") pod "d13e6f63-37d4-4780-9902-430a9669901c" (UID: "d13e6f63-37d4-4780-9902-430a9669901c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.768950 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.791392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" (UID: "9f9f648f-36fc-4ab4-9e08-cf4e01e30f22"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.804929 4766 scope.go:117] "RemoveContainer" containerID="13f1ad493c49e69abd03b3b6444cd83dde3cd1df4412312365d88ef9307e7a64" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.805115 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.813022 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b946b75c8-zb6q6"] Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819110 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d13e6f63-37d4-4780-9902-430a9669901c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819152 4766 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f7793d-39bd-4e96-a857-7de972f0c76d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819167 4766 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819284 4766 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.819299 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908c7fd8-c07e-463e-94c4-76980a3a8ba2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.870292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"61f7793d-39bd-4e96-a857-7de972f0c76d","Type":"ContainerDied","Data":"38540b330474d27ec43c9b991dc1ee2efa4d90bf561735549986060c7b3311d2"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.870413 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.878301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" event={"ID":"22d60b44-40c9-425e-8daf-8931a25954e0","Type":"ContainerDied","Data":"c7517f7d6af60d2837e96c3e702ddd2f2f09fff46823d6dc0045b42053075fb3"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.878669 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c649fd446-flqwn" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.890211 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.890210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"063ebe65-0175-443e-8c75-5018c42b3f36","Type":"ContainerDied","Data":"edc0ddf8609d91064e135d7b1badffa0f2b9c01a737dbf1954007ac34a36f143"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.905877 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/ovn-northd/0.log" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.906011 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.906481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9f9f648f-36fc-4ab4-9e08-cf4e01e30f22","Type":"ContainerDied","Data":"090eddff40a00fe6ea2b9a4d39ef4e8496a69421f9440b673916d296607e29b3"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data" (OuterVolumeSpecName: "config-data") pod "063ebe65-0175-443e-8c75-5018c42b3f36" (UID: "063ebe65-0175-443e-8c75-5018c42b3f36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"908c7fd8-c07e-463e-94c4-76980a3a8ba2","Type":"ContainerDied","Data":"9e20509f1f367971ebad4df00092bfa9e6a737cd37ee5f2217bf7f1fb1c22b6c"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.918693 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.920728 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063ebe65-0175-443e-8c75-5018c42b3f36-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.926299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6c45fdd9-srlkx" event={"ID":"d13e6f63-37d4-4780-9902-430a9669901c","Type":"ContainerDied","Data":"2b767d9a62146b9e45249c95c9dbe239af5e99c61039ee01f25412d61a3eb409"} Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.926392 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6c45fdd9-srlkx" Jan 30 16:47:22 crc kubenswrapper[4766]: I0130 16:47:22.958692 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e3be-account-create-update-qnsph" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.069100 4766 scope.go:117] "RemoveContainer" containerID="83eef1fac3cc96895ab4ddd98d9e41ad0d9179a5c5f100993449cfa02dfc79ae" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.085620 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.104638 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.111768 4766 scope.go:117] "RemoveContainer" containerID="e32b2cafc5c1ce2a47e798839cf2284131d3d57bc770f6871e99b00c69493387" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.142238 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.173228 4766 scope.go:117] "RemoveContainer" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.180947 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-d6c45fdd9-srlkx"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.202251 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.226362 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.227869 4766 scope.go:117] "RemoveContainer" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1" Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.241993 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.256716 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5c649fd446-flqwn"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.284335 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.289703 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e3be-account-create-update-qnsph"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.298960 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.302545 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.304537 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.320963 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:47:23 crc 
kubenswrapper[4766]: I0130 16:47:23.337149 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34adc844-a813-4bb0-9d46-131d1b5a7b9b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.337252 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn9rq\" (UniqueName: \"kubernetes.io/projected/34adc844-a813-4bb0-9d46-131d1b5a7b9b-kube-api-access-jn9rq\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.355801 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.358703 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.358776 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.361839 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.363130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.363310 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.364608 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.364650 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.404492 4766 scope.go:117] "RemoveContainer" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.405693 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": container with ID starting with b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5 not found: ID does not exist" containerID="b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.405735 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5"} err="failed to get container status \"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": rpc error: code = NotFound desc = could not find container \"b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5\": container with ID starting with b76d74d517695b954352727012ec69dbd68335889d86f736409624bd420dd1d5 not found: ID does not exist"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.405761 4766 scope.go:117] "RemoveContainer" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.407532 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": container with ID starting with c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1 not found: ID does not exist" containerID="c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.407569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1"} err="failed to get container status \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": rpc error: code = NotFound desc = could not find container \"c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1\": container with ID starting with c8cf61cca49f5913c4f404f73dad03aeee69e0fc8dd7632010cae106dffa30c1 not found: ID does not exist"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.407583 4766 scope.go:117] "RemoveContainer" containerID="7526886bd5bb2b792b565e84d6fd278abe954f56801bb63be7f6750c601e890f"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.433445 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.436788 4766 scope.go:117] "RemoveContainer" containerID="812a3e23be177e19676f6003e9e0ddb46880fe309badbba4e93d1efe04dcf597"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.438250 4766 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.438297 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts podName:768238f5-b74e-4f23-91ec-4eeb69375025 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:25.438285379 +0000 UTC m=+1500.076242725 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts") pod "root-account-create-update-zlndr" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025") : configmap "openstack-scripts" not found
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.470780 4766 scope.go:117] "RemoveContainer" containerID="712f1ec6de09438090f58fbb0c4f302531a0e53b3ab1025ce983291fe2a30a55"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.498430 4766 scope.go:117] "RemoveContainer" containerID="a33a51c4ce72a3331d749a25239fbd5adeae2f5c2b9a417968c58a83c32f6d49"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.513116 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539707 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539825 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539870 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539913 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.539990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.540015 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.540040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") pod \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\" (UID: \"62dd6ad1-1550-48cf-b103-b7ab6dd93c97\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.541357 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.542017 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543608 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.543922 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.555654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv" (OuterVolumeSpecName: "kube-api-access-t4qrv") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "kube-api-access-t4qrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.570671 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.571408 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.573498 4766 scope.go:117] "RemoveContainer" containerID="e5049dc222f6a4c60730423ca57b88c9c36337971b3ab52ed5de35266e17e533"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.611653 4766 scope.go:117] "RemoveContainer" containerID="722b9f0bf4bb4fdc169a16a2a0008b553646c69b6b43ec117a7046c04ee677ad"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.619907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "62dd6ad1-1550-48cf-b103-b7ab6dd93c97" (UID: "62dd6ad1-1550-48cf-b103-b7ab6dd93c97"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.643615 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") pod \"768238f5-b74e-4f23-91ec-4eeb69375025\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.643857 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") pod \"768238f5-b74e-4f23-91ec-4eeb69375025\" (UID: \"768238f5-b74e-4f23-91ec-4eeb69375025\") "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644304 4766 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644322 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644353 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4qrv\" (UniqueName: \"kubernetes.io/projected/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-kube-api-access-t4qrv\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644364 4766 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644371 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644389 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644398 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62dd6ad1-1550-48cf-b103-b7ab6dd93c97-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.644449 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768238f5-b74e-4f23-91ec-4eeb69375025" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.647578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497" (OuterVolumeSpecName: "kube-api-access-fc497") pod "768238f5-b74e-4f23-91ec-4eeb69375025" (UID: "768238f5-b74e-4f23-91ec-4eeb69375025"). InnerVolumeSpecName "kube-api-access-fc497". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.652478 4766 scope.go:117] "RemoveContainer" containerID="1018ad035e1117daba7d0fa6d624c300af7a28f4b34f661587a2d4823b6112f1"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.663436 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.674520 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7bc6f65df6-mx4xk" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.150:5000/v3\": read tcp 10.217.0.2:45678->10.217.0.150:5000: read: connection reset by peer"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.697024 4766 scope.go:117] "RemoveContainer" containerID="858741e925270a4f1dbc19a53c612cec0223b237f4d6e8b8741323f1a01a83e4"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.737337 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.739399 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.740650 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.740726 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.743361 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746214 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc497\" (UniqueName: \"kubernetes.io/projected/768238f5-b74e-4f23-91ec-4eeb69375025-kube-api-access-fc497\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746246 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768238f5-b74e-4f23-91ec-4eeb69375025-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.746258 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.749744 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.755334 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.755414 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.765379 4766 scope.go:117] "RemoveContainer" containerID="3a4e2d5078fd2eacb9382be606cd830ba0289dae57441c51076a58524a7c71f4"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.805033 4766 scope.go:117] "RemoveContainer" containerID="69d64425bbacf9da73461e63012a983fa8ef6f8440c070018088e050cf6bc5a6"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.905336 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.908918 4766 scope.go:117] "RemoveContainer" containerID="1fe4777b2695557b65a6f9a91a3f309b01c42b5f0288bbecc862c67c0bda120a"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.945428 4766 scope.go:117] "RemoveContainer" containerID="e3fbc192fdad733807e36f2325831d022e561f39e323dd8f0e5a0da778a417b6"
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.964645 4766 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 30 16:47:23 crc kubenswrapper[4766]: E0130 16:47:23.964727 4766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data podName:bc2a138c-9abd-427b-815c-cbb9e12459f6 nodeName:}" failed. No retries permitted until 2026-01-30 16:47:31.964707957 +0000 UTC m=+1506.602665303 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data") pod "rabbitmq-server-0" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6") : configmap "rabbitmq-config-data" not found
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.976679 4766 scope.go:117] "RemoveContainer" containerID="929f2cc066366dea699ff53637f354d8aeab119c1be0aa3851b50d5090307472"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987287 4766 generic.go:334] "Generic (PLEG): container finished" podID="b21357e1-82c9-419a-a191-359c84d6d001" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920" exitCode=0
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987387 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987401 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"}
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.987467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b21357e1-82c9-419a-a191-359c84d6d001","Type":"ContainerDied","Data":"3e10ead1aca56572964d46a5892bb1dffdbbed95ee78ced09f4df00421ff6107"}
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.990097 4766 generic.go:334] "Generic (PLEG): container finished" podID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerID="40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" exitCode=0
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.990191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9"}
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.995194 4766 generic.go:334] "Generic (PLEG): container finished" podID="821de7d3-dc41-4351-bced-6ed09a729223" containerID="7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" exitCode=0
Jan 30 16:47:23 crc kubenswrapper[4766]: I0130 16:47:23.995211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerDied","Data":"7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188"}
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.001120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zlndr" event={"ID":"768238f5-b74e-4f23-91ec-4eeb69375025","Type":"ContainerDied","Data":"51ffbc2026ffaf4c9f26fd55d50669f8d3b947029fdc717ba29a5acfdc7e97bf"}
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.001149 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zlndr"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.002521 4766 scope.go:117] "RemoveContainer" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016724 4766 generic.go:334] "Generic (PLEG): container finished" podID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399" exitCode=0
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016808 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"}
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62dd6ad1-1550-48cf-b103-b7ab6dd93c97","Type":"ContainerDied","Data":"7cd3716ef2ba5300e2a9e059a29e8e25763df286461c739788ee844a36ee0a0f"}
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.016902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.040679 4766 scope.go:117] "RemoveContainer" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.071672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072288 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072327 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072351 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072449 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072466 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072574 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.072599 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") pod \"b21357e1-82c9-419a-a191-359c84d6d001\" (UID: \"b21357e1-82c9-419a-a191-359c84d6d001\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.081982 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.082444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.082538 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.083119 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx" (OuterVolumeSpecName: "kube-api-access-vjnbx") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "kube-api-access-vjnbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.087359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info" (OuterVolumeSpecName: "pod-info") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.100807 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.102424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.108369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.115467 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" path="/var/lib/kubelet/pods/063ebe65-0175-443e-8c75-5018c42b3f36/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.116395 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ae2453-74fa-4114-9261-21b381518493" path="/var/lib/kubelet/pods/14ae2453-74fa-4114-9261-21b381518493/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.116941 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" path="/var/lib/kubelet/pods/17d6e828-fc05-46cb-9bee-bac08ebf331a/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118001 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" path="/var/lib/kubelet/pods/22d60b44-40c9-425e-8daf-8931a25954e0/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118497 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34adc844-a813-4bb0-9d46-131d1b5a7b9b" path="/var/lib/kubelet/pods/34adc844-a813-4bb0-9d46-131d1b5a7b9b/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.118854 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" path="/var/lib/kubelet/pods/40f1dc52-213f-4a5b-af33-4067a83859e4/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.122536 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" path="/var/lib/kubelet/pods/447a8ec3-4e50-40a9-b418-01fd8c0eb03e/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.123150 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" path="/var/lib/kubelet/pods/4bc2931b-8439-4c5c-be4d-43f4aab528f2/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.123806 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" path="/var/lib/kubelet/pods/61f7793d-39bd-4e96-a857-7de972f0c76d/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.124357 4766 scope.go:117] "RemoveContainer" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.127998 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" path="/var/lib/kubelet/pods/6d5b8a42-39dd-4b1b-9f92-1e3585b6707b/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.128782 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" path="/var/lib/kubelet/pods/908c7fd8-c07e-463e-94c4-76980a3a8ba2/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.135857 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": container with ID starting with db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920 not found: ID does not exist" containerID="db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.136316 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920"} err="failed to get container status \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": rpc error: code = NotFound desc = could not find container \"db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920\": container with ID starting with db7977d3774b9fe20bfc32eaf113a99ac43aaf8d33fb949d557be22220077920 not found: ID does not exist"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.136437 4766 scope.go:117] "RemoveContainer" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.137012 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" path="/var/lib/kubelet/pods/9ad68dc2-23ff-4044-b74d-149ae8f02bc0/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.140299 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": container with ID starting with 9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d not found: ID does not exist" containerID="9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.140334 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d"} err="failed to get container status \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": rpc error: code = NotFound desc = could not find container \"9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d\": container with ID starting with 9ef10f5c3d1f4d9a253dd3e9c4606457ea1bec6b576606c275600dbaeca7eb9d not found: ID does not exist"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.140359 4766 scope.go:117] "RemoveContainer" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.143098 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" path="/var/lib/kubelet/pods/9f9f648f-36fc-4ab4-9e08-cf4e01e30f22/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.143859 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13e6f63-37d4-4780-9902-430a9669901c" path="/var/lib/kubelet/pods/d13e6f63-37d4-4780-9902-430a9669901c/volumes"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.166900 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf" (OuterVolumeSpecName: "server-conf") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.167704 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data" (OuterVolumeSpecName: "config-data") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174276 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174449 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174520 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174592 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174684 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b21357e1-82c9-419a-a191-359c84d6d001-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174743 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b21357e1-82c9-419a-a191-359c84d6d001-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174794 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174846 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjnbx\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-kube-api-access-vjnbx\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174897 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.174953 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b21357e1-82c9-419a-a191-359c84d6d001-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200589 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zlndr"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200630 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zlndr"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200648 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.200661 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.213315 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b21357e1-82c9-419a-a191-359c84d6d001" (UID: "b21357e1-82c9-419a-a191-359c84d6d001"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.215432 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.239368 4766 scope.go:117] "RemoveContainer" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.265652 4766 scope.go:117] "RemoveContainer" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"
Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.271592 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": container with ID starting with aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399 not found: ID does not exist" containerID="aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.271637 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399"} err="failed to get container status \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": rpc error: code = NotFound desc = could not find container \"aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399\": container with ID starting with aff944adaf195dece4b9f2cffe288741329b6e6257d83a2eb304dfe74183c399 not found: ID does not exist"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.271664 4766 scope.go:117] "RemoveContainer" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"
Jan 30 16:47:24 crc kubenswrapper[4766]: E0130 16:47:24.272109 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": container with ID starting with 6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171 not found: ID does not exist" containerID="6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.272162 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171"} err="failed to get container status \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": rpc error: code = NotFound desc = could not find container \"6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171\": container with ID starting with 6760427dc9415b24501e486881c4fff8d34a1c94fce91dc3276f0bac6cf59171 not found: ID does not exist"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.276477 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b21357e1-82c9-419a-a191-359c84d6d001-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.276527 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.346259 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.358752 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.576381 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.593737 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.682916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.682985 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683018 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683059 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683123 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683160 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683259 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683285 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683331 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683360 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683414 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683441 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683483 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683540 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") pod \"bc2a138c-9abd-427b-815c-cbb9e12459f6\" (UID: \"bc2a138c-9abd-427b-815c-cbb9e12459f6\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683579 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.683606 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") pod \"821de7d3-dc41-4351-bced-6ed09a729223\" (UID: \"821de7d3-dc41-4351-bced-6ed09a729223\") "
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.684400 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.686673 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688649 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.688966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k" (OuterVolumeSpecName: "kube-api-access-kbx8k") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "kube-api-access-kbx8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.690818 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info" (OuterVolumeSpecName: "pod-info") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.692268 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.693768 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6" (OuterVolumeSpecName: "kube-api-access-pxtx6") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "kube-api-access-pxtx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.695995 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts" (OuterVolumeSpecName: "scripts") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.699258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.710893 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data" (OuterVolumeSpecName: "config-data") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.717345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data" (OuterVolumeSpecName: "config-data") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.740469 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.759135 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.763350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.784891 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "821de7d3-dc41-4351-bced-6ed09a729223" (UID: "821de7d3-dc41-4351-bced-6ed09a729223"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789202 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789249 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789265 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789280 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbx8k\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-kube-api-access-kbx8k\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789293 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789305 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2a138c-9abd-427b-815c-cbb9e12459f6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789317 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtx6\" (UniqueName: \"kubernetes.io/projected/821de7d3-dc41-4351-bced-6ed09a729223-kube-api-access-pxtx6\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789330 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789341 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789352 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789365 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789376 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789388 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-plugins-conf\") on node
\"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789399 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2a138c-9abd-427b-815c-cbb9e12459f6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789410 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789420 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/821de7d3-dc41-4351-bced-6ed09a729223-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789457 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.789471 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2a138c-9abd-427b-815c-cbb9e12459f6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.824471 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bc2a138c-9abd-427b-815c-cbb9e12459f6" (UID: "bc2a138c-9abd-427b-815c-cbb9e12459f6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.846520 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.891286 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2a138c-9abd-427b-815c-cbb9e12459f6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:24 crc kubenswrapper[4766]: I0130 16:47:24.891323 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034836 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bc2a138c-9abd-427b-815c-cbb9e12459f6","Type":"ContainerDied","Data":"737ac00e5e8f2d0fe8c8cc8ad014b2d9c4eb214f4c0587d701ecfb018001f677"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034872 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.034894 4766 scope.go:117] "RemoveContainer" containerID="40a3ac01470631f3856774db28b8f61347a07c88a9ecabdd8c4a7fdd55f65bf9" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.039051 4766 generic.go:334] "Generic (PLEG): container finished" podID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" exitCode=0 Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.039109 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerDied","Data":"7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.041050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bc6f65df6-mx4xk" event={"ID":"821de7d3-dc41-4351-bced-6ed09a729223","Type":"ContainerDied","Data":"f7e59fee20a8c8c4ebf0975c2f9adc338f4c7ce8ad17f7e1383af919425199ff"} Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.041204 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bc6f65df6-mx4xk" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.066723 4766 scope.go:117] "RemoveContainer" containerID="420bba712e788513308111db89ced03a759c0a7dc6262370124c82df4dd31af5" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.099585 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.120085 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.129232 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.131926 4766 scope.go:117] "RemoveContainer" containerID="7fedc7578cd65e1da9885d991db738315a5357e363187467c355ed6389131188" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.146527 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7bc6f65df6-mx4xk"] Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.535083 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.691693 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705603 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705761 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.705801 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") pod \"4f217490-8a26-4f4b-935b-fe5918500948\" (UID: \"4f217490-8a26-4f4b-935b-fe5918500948\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.713121 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz" (OuterVolumeSpecName: "kube-api-access-jmrmz") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "kube-api-access-jmrmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.736320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.743718 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data" (OuterVolumeSpecName: "config-data") pod "4f217490-8a26-4f4b-935b-fe5918500948" (UID: "4f217490-8a26-4f4b-935b-fe5918500948"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807436 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807506 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.807607 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") pod \"7fa69536-b701-43a4-814a-2ba16974b1dd\" (UID: \"7fa69536-b701-43a4-814a-2ba16974b1dd\") " Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808110 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmrmz\" (UniqueName: \"kubernetes.io/projected/4f217490-8a26-4f4b-935b-fe5918500948-kube-api-access-jmrmz\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808139 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.808152 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f217490-8a26-4f4b-935b-fe5918500948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.811171 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p" (OuterVolumeSpecName: "kube-api-access-5r45p") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "kube-api-access-5r45p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.825579 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.830512 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data" (OuterVolumeSpecName: "config-data") pod "7fa69536-b701-43a4-814a-2ba16974b1dd" (UID: "7fa69536-b701-43a4-814a-2ba16974b1dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910075 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r45p\" (UniqueName: \"kubernetes.io/projected/7fa69536-b701-43a4-814a-2ba16974b1dd-kube-api-access-5r45p\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910132 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:25 crc kubenswrapper[4766]: I0130 16:47:25.910144 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa69536-b701-43a4-814a-2ba16974b1dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.047678 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" path="/var/lib/kubelet/pods/62dd6ad1-1550-48cf-b103-b7ab6dd93c97/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.048422 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768238f5-b74e-4f23-91ec-4eeb69375025" path="/var/lib/kubelet/pods/768238f5-b74e-4f23-91ec-4eeb69375025/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.048975 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821de7d3-dc41-4351-bced-6ed09a729223" path="/var/lib/kubelet/pods/821de7d3-dc41-4351-bced-6ed09a729223/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.050254 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21357e1-82c9-419a-a191-359c84d6d001" path="/var/lib/kubelet/pods/b21357e1-82c9-419a-a191-359c84d6d001/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.052077 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" path="/var/lib/kubelet/pods/bc2a138c-9abd-427b-815c-cbb9e12459f6/volumes" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7fa69536-b701-43a4-814a-2ba16974b1dd","Type":"ContainerDied","Data":"dc9c6135c4c38d623c7e0c8ee4ec3b3b5ccbc4d503c09310d8f4f5dcfd14f0b7"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065305 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.065458 4766 scope.go:117] "RemoveContainer" containerID="7d4fac86f391b975d4d442ab4194690e25c69e0569d23636e3c1ed6941b267b8" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069765 4766 generic.go:334] "Generic (PLEG): container finished" podID="4f217490-8a26-4f4b-935b-fe5918500948" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" exitCode=0 Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069801 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerDied","Data":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.069831 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f217490-8a26-4f4b-935b-fe5918500948","Type":"ContainerDied","Data":"f056061bd522d3379f642d93301ecddb3bb56cae94292cc340f18fe39f2e4f4b"} Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.095302 4766 scope.go:117] "RemoveContainer" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.101293 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.116886 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122152 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122240 4766 scope.go:117] "RemoveContainer" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: E0130 16:47:26.122734 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": container with ID starting with 49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884 not found: ID does not exist" containerID="49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.122773 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884"} err="failed to get container status \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": rpc error: code = NotFound desc = could not find container \"49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884\": container with ID starting with 49f1b21f2e3d5b4e03c8ce1eb39f56c81d4a5fbe999ee27e98fc4e4b585ae884 not found: ID does not exist" Jan 30 16:47:26 crc kubenswrapper[4766]: I0130 16:47:26.127123 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:47:27 crc kubenswrapper[4766]: I0130 16:47:27.188225 4766 scope.go:117] "RemoveContainer" containerID="d0d3a385994a831e8571ce1c7041fd4ec8f5ca6264fb5b4f4e85ee29e52f53f1" Jan 30 16:47:28 crc kubenswrapper[4766]: I0130 16:47:28.049281 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f217490-8a26-4f4b-935b-fe5918500948" path="/var/lib/kubelet/pods/4f217490-8a26-4f4b-935b-fe5918500948/volumes" Jan 30 16:47:28 crc kubenswrapper[4766]: I0130 16:47:28.051662 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" path="/var/lib/kubelet/pods/7fa69536-b701-43a4-814a-2ba16974b1dd/volumes" Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.734443 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.734549 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.735844 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740351 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.740579 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server"
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.741813 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:28 crc kubenswrapper[4766]: E0130 16:47:28.741848 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd"
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.734288 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.734920 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735100 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735255 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.735321 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server"
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.738130 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.739445 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:33 crc kubenswrapper[4766]: E0130 16:47:33.739484 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd"
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.734232 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735306 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735601 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735627 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server"
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.735958 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.737168 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.738994 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 30 16:47:38 crc kubenswrapper[4766]: E0130 16:47:38.739027 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd"
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.045312 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.045671 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.187539 4766 generic.go:334] "Generic (PLEG): container finished" podID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerID="2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" exitCode=0
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.187595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666"}
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.263166 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr"
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437439 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437505 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437535 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437664 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437682 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.437765 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") pod \"533a3663-0294-48ef-b771-1f5fb3ae05ab\" (UID: \"533a3663-0294-48ef-b771-1f5fb3ae05ab\") "
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.443207 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.443709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4" (OuterVolumeSpecName: "kube-api-access-8jfm4") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "kube-api-access-8jfm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.474445 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.475008 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.477307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.478628 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config" (OuterVolumeSpecName: "config") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.491607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "533a3663-0294-48ef-b771-1f5fb3ae05ab" (UID: "533a3663-0294-48ef-b771-1f5fb3ae05ab"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539503 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539551 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539560 4766 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539570 4766 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539579 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539587 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jfm4\" (UniqueName: \"kubernetes.io/projected/533a3663-0294-48ef-b771-1f5fb3ae05ab-kube-api-access-8jfm4\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:39 crc kubenswrapper[4766]: I0130 16:47:39.539596 4766 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533a3663-0294-48ef-b771-1f5fb3ae05ab-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.198874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdf9c45-5nxgr" event={"ID":"533a3663-0294-48ef-b771-1f5fb3ae05ab","Type":"ContainerDied","Data":"c0a3cd47bf6f73c69d465e105e571ff0dfdead63ace53c2387dc41608358f285"}
Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.199449 4766 scope.go:117] "RemoveContainer" containerID="7b8bf066636272b652b67ba985eba08e74de13009f953d0190f16c41f92e8863"
Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.198932 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr"
Need to start a new one" pod="openstack/neutron-6d4bdf9c45-5nxgr" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.223269 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.229454 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d4bdf9c45-5nxgr"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.229939 4766 scope.go:117] "RemoveContainer" containerID="2ef26908ff305b23e8e962f558b46195015a464a6f4ddf9d9d52d4e04bf0f666" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399218 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399622 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399640 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399652 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399658 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399673 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399680 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399689 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399695 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399706 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399713 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399721 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-utilities" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399727 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-utilities" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399740 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399745 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc 
kubenswrapper[4766]: E0130 16:47:40.399752 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-content" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399758 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="extract-content" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399769 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399786 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399793 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399802 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399808 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399815 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399821 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399831 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399844 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399850 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399858 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399864 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399873 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399879 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399889 
4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399895 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399903 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399908 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399918 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399932 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399938 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399947 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399953 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399971 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399981 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.399988 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.399999 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400017 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400023 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: 
E0130 16:47:40.400031 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400038 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400050 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400057 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400066 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400072 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400083 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400089 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="mysql-bootstrap" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400098 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400105 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400115 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400135 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400143 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400156 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400165 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400193 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400199 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 
16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400208 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400213 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400225 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400231 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400241 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400246 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400253 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400267 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400273 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400284 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400291 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400301 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400306 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400312 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400318 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400327 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400333 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="setup-container" Jan 30 16:47:40 crc kubenswrapper[4766]: E0130 16:47:40.400344 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400351 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400498 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dd6ad1-1550-48cf-b103-b7ab6dd93c97" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400512 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21357e1-82c9-419a-a191-359c84d6d001" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400525 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400537 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="proxy-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400547 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400557 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400567 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400576 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="ovn-northd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400587 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad68dc2-23ff-4044-b74d-149ae8f02bc0" containerName="galera" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400594 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f7793d-39bd-4e96-a857-7de972f0c76d" containerName="memcached" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400604 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400615 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400628 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa69536-b701-43a4-814a-2ba16974b1dd" containerName="nova-cell1-conductor-conductor" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400640 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9f648f-36fc-4ab4-9e08-cf4e01e30f22" containerName="openstack-network-exporter" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400652 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 
16:47:40.400664 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" containerName="cinder-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400678 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-metadata" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400686 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="cinder-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400695 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2a138c-9abd-427b-815c-cbb9e12459f6" containerName="rabbitmq" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400701 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400710 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="821de7d3-dc41-4351-bced-6ed09a729223" containerName="keystone-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400719 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400727 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f1dc52-213f-4a5b-af33-4067a83859e4" containerName="nova-metadata-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400739 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400751 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ae2453-74fa-4114-9261-21b381518493" containerName="nova-api-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="063ebe65-0175-443e-8c75-5018c42b3f36" containerName="probe" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400771 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb576787-90a5-4e81-a047-6fcf37921335" containerName="kube-state-metrics" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400780 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="447a8ec3-4e50-40a9-b418-01fd8c0eb03e" containerName="placement-api" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400789 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d60b44-40c9-425e-8daf-8931a25954e0" containerName="barbican-keystone-listener" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400798 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-central-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400806 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="sg-core" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400814 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="908c7fd8-c07e-463e-94c4-76980a3a8ba2" containerName="ceilometer-notification-agent" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400826 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d13e6f63-37d4-4780-9902-430a9669901c" containerName="barbican-worker-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400838 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c3343-246e-4309-bd46-9bcd92cad574" containerName="registry-server" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400846 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" containerName="neutron-httpd" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f217490-8a26-4f4b-935b-fe5918500948" containerName="nova-scheduler-scheduler" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400861 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc2931b-8439-4c5c-be4d-43f4aab528f2" containerName="glance-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.400878 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d6e828-fc05-46cb-9bee-bac08ebf331a" containerName="barbican-api-log" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.402602 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.411134 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.552168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.552993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.553044 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655445 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655552 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.655694 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.656089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.656117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.676757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"redhat-marketplace-k5dgz\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:40 crc kubenswrapper[4766]: I0130 16:47:40.720487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:41 crc kubenswrapper[4766]: I0130 16:47:41.213009 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.049371 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533a3663-0294-48ef-b771-1f5fb3ae05ab" path="/var/lib/kubelet/pods/533a3663-0294-48ef-b771-1f5fb3ae05ab/volumes" Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.215969 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0" exitCode=0 Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.216019 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0"} Jan 30 16:47:42 crc kubenswrapper[4766]: I0130 16:47:42.216049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerStarted","Data":"8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64"} Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.733449 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734225 
4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734713 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.734756 4766 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.735457 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.736922 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.738312 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 16:47:43 crc kubenswrapper[4766]: E0130 16:47:43.738398 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-l6hkn" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:47:45 crc kubenswrapper[4766]: I0130 16:47:45.243720 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab" exitCode=0 Jan 30 16:47:45 crc kubenswrapper[4766]: I0130 16:47:45.243791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 
16:47:46.266730 4766 generic.go:334] "Generic (PLEG): container finished" podID="8b182790-0761-450c-85d1-63ddd59ac10f" containerID="9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" exitCode=137 Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.266778 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.270002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerStarted","Data":"bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.272629 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.273507 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a501828-e06b-4096-b555-1ecd9323ee20" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" exitCode=137 Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.273659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9"} Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.819895 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.821514 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.880938 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956752 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956819 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.956937 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957000 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957085 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957094 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log" (OuterVolumeSpecName: "var-log") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957109 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") pod \"2a501828-e06b-4096-b555-1ecd9323ee20\" (UID: \"2a501828-e06b-4096-b555-1ecd9323ee20\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957164 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957199 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957507 4766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.957519 4766 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.958191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts" (OuterVolumeSpecName: "scripts") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.958218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run" (OuterVolumeSpecName: "var-run") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.959561 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache" (OuterVolumeSpecName: "cache") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.959869 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib" (OuterVolumeSpecName: "var-lib") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.960366 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock" (OuterVolumeSpecName: "lock") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.964305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v" (OuterVolumeSpecName: "kube-api-access-cp72v") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "kube-api-access-cp72v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.972597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4" (OuterVolumeSpecName: "kube-api-access-p2mp4") pod "2a501828-e06b-4096-b555-1ecd9323ee20" (UID: "2a501828-e06b-4096-b555-1ecd9323ee20"). InnerVolumeSpecName "kube-api-access-p2mp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:46 crc kubenswrapper[4766]: I0130 16:47:46.972773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8b182790-0761-450c-85d1-63ddd59ac10f\" (UID: \"8b182790-0761-450c-85d1-63ddd59ac10f\") " Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058434 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2mp4\" (UniqueName: \"kubernetes.io/projected/2a501828-e06b-4096-b555-1ecd9323ee20-kube-api-access-p2mp4\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058453 4766 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058463 4766 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-cache\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058473 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp72v\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-kube-api-access-cp72v\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058482 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a501828-e06b-4096-b555-1ecd9323ee20-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058489 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a501828-e06b-4096-b555-1ecd9323ee20-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058497 4766 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b182790-0761-450c-85d1-63ddd59ac10f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.058506 4766 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b182790-0761-450c-85d1-63ddd59ac10f-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.062585 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.159390 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.192956 4766 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.237703 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b182790-0761-450c-85d1-63ddd59ac10f" (UID: "8b182790-0761-450c-85d1-63ddd59ac10f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.260496 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b182790-0761-450c-85d1-63ddd59ac10f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.260524 4766 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b182790-0761-450c-85d1-63ddd59ac10f","Type":"ContainerDied","Data":"e2895452d8c205fa0d4dc996a2287e6197931bc707b2d07e3c6da2c761ed67e2"} Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290195 4766 scope.go:117] "RemoveContainer" containerID="9ef33fd7af0697eee6aa37a4f43e02cd1ff7caec575a2b12e994eb6a0549b3a1" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.290463 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.299898 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l6hkn_2a501828-e06b-4096-b555-1ecd9323ee20/ovs-vswitchd/0.log" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.300908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l6hkn" event={"ID":"2a501828-e06b-4096-b555-1ecd9323ee20","Type":"ContainerDied","Data":"f054a0fee68ab2bd51f8c1a2db002cd94be5729245e8ef0109de145c3c8117f0"} Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.300956 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-l6hkn" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.330901 4766 scope.go:117] "RemoveContainer" containerID="fb57872e5fb6a58cc8c40e732147b1054a269fa84054e322cc2f52fa8c9c9ad5" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.337571 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5dgz" podStartSLOduration=3.595718713 podStartE2EDuration="7.337553161s" podCreationTimestamp="2026-01-30 16:47:40 +0000 UTC" firstStartedPulling="2026-01-30 16:47:42.217759063 +0000 UTC m=+1516.855716409" lastFinishedPulling="2026-01-30 16:47:45.959593511 +0000 UTC m=+1520.597550857" observedRunningTime="2026-01-30 16:47:47.334650349 +0000 UTC m=+1521.972607705" watchObservedRunningTime="2026-01-30 16:47:47.337553161 +0000 UTC m=+1521.975510507" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.365211 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.368507 4766 scope.go:117] "RemoveContainer" containerID="1867868d042226b0102d7af4efd2c5d0686e840d200dd33d6ec36968fc03fa94" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.378384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.385136 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.388386 4766 scope.go:117] "RemoveContainer" containerID="2de20de1c925cc2fe2631c488767f62edc5546cfa1bab3a9f5b3b5568ebd33bd" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.393083 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-l6hkn"] Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.413335 4766 scope.go:117] "RemoveContainer" containerID="cabff9d9eac1e96f01b9ae0ea6118276a0a0f7d8869b118376d2a160d9c95fbd" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.432485 4766 scope.go:117] "RemoveContainer" containerID="686b4de4bfb8090cbee7ffd8b429f45a75fa7f8db6a139284fa6c26cb4ebf320" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.450296 4766 scope.go:117] "RemoveContainer" containerID="93345e4db373057383a4e7560531f5f8dc222e4ea8e6511d8365b6b242bb9305" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.475938 4766 scope.go:117] "RemoveContainer" containerID="ed024a5d8346d6cba34ca8427849879c1c8708dd88d1dff2c821e85ba14d6f5d" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.496697 4766 scope.go:117] "RemoveContainer" containerID="3d565bf23f387505355fc88939efb3e922421c5ce2f3cce9972954f997abf7e9" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.517414 4766 scope.go:117] "RemoveContainer" containerID="7e0ee7c6c23df84239fa6a0f2dda7982f60b3b9413744489a50144073243e8be" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.535652 4766 scope.go:117] "RemoveContainer" containerID="4a378782d7a92d740e9d92e144de664ebf098b972f3febcbf7a8d0d8994d65c2" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.553709 4766 scope.go:117] "RemoveContainer" containerID="b33858618ac4f97b57ed3a00bf2ef12f457aa24b08e1a7b17d0bccf28da68819" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.582537 4766 scope.go:117] "RemoveContainer" containerID="8fb2a9d730e1fac1ed432db1aa83e0d89ad22b45725d36e0ee578815b9d18bd4" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.608362 4766 scope.go:117] 
"RemoveContainer" containerID="13a067c315d5248f25766b082e783d339afd79a237563ce5f91071342f2570b8" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.624862 4766 scope.go:117] "RemoveContainer" containerID="374f13cd2087a08f8eec3c99c6917ad293b1c5c6f50b2378b94b79cc272999d3" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.641878 4766 scope.go:117] "RemoveContainer" containerID="83993d94a8bc7f594d30caf8ddb5c055e031d5c9a949175233563c06d2f790e9" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.667491 4766 scope.go:117] "RemoveContainer" containerID="087b402dcf8991a64005b271355c27df0fa550008254a006dbfb3bc8943043f2" Jan 30 16:47:47 crc kubenswrapper[4766]: I0130 16:47:47.684926 4766 scope.go:117] "RemoveContainer" containerID="227e5efd4255dd7061992117871a77b87ce5c9b6b3d5ba505bf41d645da12be4" Jan 30 16:47:48 crc kubenswrapper[4766]: I0130 16:47:48.051423 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" path="/var/lib/kubelet/pods/2a501828-e06b-4096-b555-1ecd9323ee20/volumes" Jan 30 16:47:48 crc kubenswrapper[4766]: I0130 16:47:48.052429 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" path="/var/lib/kubelet/pods/8b182790-0761-450c-85d1-63ddd59ac10f/volumes" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.721214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.721507 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:50 crc kubenswrapper[4766]: I0130 16:47:50.762620 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.380833 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.421550 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.603609 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:51 crc kubenswrapper[4766]: I0130 16:47:51.603632 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6d5b8a42-39dd-4b1b-9f92-1e3585b6707b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.331797 4766 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podaca8dfc0-f915-4696-95c1-3c232f2ea35a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : Timed out while waiting for systemd to remove kubepods-besteffort-podaca8dfc0_f915_4696_95c1_3c232f2ea35a.slice" Jan 30 16:47:52 crc 
kubenswrapper[4766]: E0130 16:47:52.332237 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : unable to destroy cgroup paths for cgroup [kubepods besteffort podaca8dfc0-f915-4696-95c1-3c232f2ea35a] : Timed out while waiting for systemd to remove kubepods-besteffort-podaca8dfc0_f915_4696_95c1_3c232f2ea35a.slice" pod="openstack/cinder-api-0" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.345454 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.389765 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:52 crc kubenswrapper[4766]: I0130 16:47:52.395338 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:47:53 crc kubenswrapper[4766]: I0130 16:47:53.352866 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5dgz" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" containerID="cri-o://bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" gracePeriod=2 Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.049863 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca8dfc0-f915-4696-95c1-3c232f2ea35a" path="/var/lib/kubelet/pods/aca8dfc0-f915-4696-95c1-3c232f2ea35a/volumes" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367198 4766 generic.go:334] "Generic (PLEG): container finished" podID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerID="bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" exitCode=0 Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367220 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a"} Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5dgz" event={"ID":"350bf3b6-f831-4bd0-a887-8f4b97e294aa","Type":"ContainerDied","Data":"8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64"} Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.367296 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e94e9010d3ddf2209ffde0d21db9289d0f351ce8caffb64e966f0bb2f18ce64" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.378548 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.488804 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.488975 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.489036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") pod \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\" (UID: \"350bf3b6-f831-4bd0-a887-8f4b97e294aa\") " Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.490539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities" (OuterVolumeSpecName: "utilities") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.494827 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7" (OuterVolumeSpecName: "kube-api-access-jdjs7") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "kube-api-access-jdjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.521040 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "350bf3b6-f831-4bd0-a887-8f4b97e294aa" (UID: "350bf3b6-f831-4bd0-a887-8f4b97e294aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591131 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjs7\" (UniqueName: \"kubernetes.io/projected/350bf3b6-f831-4bd0-a887-8f4b97e294aa-kube-api-access-jdjs7\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591209 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:54 crc kubenswrapper[4766]: I0130 16:47:54.591225 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bf3b6-f831-4bd0-a887-8f4b97e294aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.375677 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5dgz" Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.404164 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:55 crc kubenswrapper[4766]: I0130 16:47:55.413099 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5dgz"] Jan 30 16:47:56 crc kubenswrapper[4766]: I0130 16:47:56.048790 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" path="/var/lib/kubelet/pods/350bf3b6-f831-4bd0-a887-8f4b97e294aa/volumes" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045388 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045845 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.045900 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.046430 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.046484 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" gracePeriod=600 Jan 30 16:48:09 crc kubenswrapper[4766]: E0130 16:48:09.167787 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494223 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" exitCode=0 Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494270 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"} Jan 30 16:48:09 
Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494308 4766 scope.go:117] "RemoveContainer" containerID="401c81042a218118cfba77ecd472ad3789063907971964c9b9416c5db7f3d8ba"
Jan 30 16:48:09 crc kubenswrapper[4766]: I0130 16:48:09.494888 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"
Jan 30 16:48:09 crc kubenswrapper[4766]: E0130 16:48:09.495233 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 16:48:22 crc kubenswrapper[4766]: I0130 16:48:22.040093 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"
Jan 30 16:48:22 crc kubenswrapper[4766]: E0130 16:48:22.040677 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 16:48:27 crc kubenswrapper[4766]: I0130 16:48:27.966237 4766 scope.go:117] "RemoveContainer" containerID="384add243e65cdf50e496a8167782257f5aa6061e63ba8e7a412091ee4ed18e7"
Jan 30 16:48:27 crc kubenswrapper[4766]: I0130 16:48:27.994027 4766 scope.go:117] "RemoveContainer" containerID="3a0eaa2d691ae4d65e795c3996eb0ab131211168f3e378f7e5d301593d79afe7"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.019309 4766 scope.go:117] "RemoveContainer" containerID="996950689e39dcea64b26ccd476b24aa5095e91f7aed3e954e00b825f7630cc9"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.041948 4766 scope.go:117] "RemoveContainer" containerID="5d846068f29d3046551737a3e9e9cf0e1ed2259d3b638644a8119627f752a5bb"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.088331 4766 scope.go:117] "RemoveContainer" containerID="46dfb2a0af6dc1c92f20836420bf6bad9d95ad7a83767eb35ea5c22ee21a6991"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.110464 4766 scope.go:117] "RemoveContainer" containerID="ccba621742d68e9586276ff231a6fa1b8cc39d7109fc1db500072a77f2e0577a"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.140078 4766 scope.go:117] "RemoveContainer" containerID="5d73c2b655a052cf02654b11be29a35dfaa9dff493fdf53769ae78f9a9393392"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.169141 4766 scope.go:117] "RemoveContainer" containerID="16de9997b9c78a1addb7a6173a72d9c91cb7c20a2b569788c1ccd21789b937ba"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.199057 4766 scope.go:117] "RemoveContainer" containerID="b3115a74162c402b5afd67304852082bc2869cd8ceb2957889ed409ae79ee5a9"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.220120 4766 scope.go:117] "RemoveContainer" containerID="7bfe4866f66053fb173d427988627ec6e6f5d14c9ef1395833beafecd3414e5d"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.236992 4766 scope.go:117] "RemoveContainer" containerID="89fde9e0995894b317c9fa05cd0667cbf50e79b056befd3734c3ed716957dbe3"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.256437 4766 scope.go:117] "RemoveContainer" containerID="cc27ffe2d01636ffacab81d5d7a098bb9dc884b5c3f6289425d3f7eacfe02395"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.276463 4766 scope.go:117] "RemoveContainer" containerID="88d113226aeebb5db30f4f4f9b3c172c70a6fbe5baa221cf177cb6428428ba00"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.313932 4766 scope.go:117] "RemoveContainer" containerID="e66531f1ac1c7bb36e0303175964fac57e3e6bc53065d7b2dc2989ce9b3d088e"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.338154 4766 scope.go:117] "RemoveContainer" containerID="3126afd72a7e503d66c3abfdc8d12c8e5d1f45d05dcb98bf8bf9842b6dbab025"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.356126 4766 scope.go:117] "RemoveContainer" containerID="10c98f81e678691873d549baafc8dd66a2c7e23fa5f08a3d15b04d97e86b3c60"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.375578 4766 scope.go:117] "RemoveContainer" containerID="608ba2a26d2d587734c8a4f7540403d434c83f4f3e8dcb71158c93e46d824161"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.409454 4766 scope.go:117] "RemoveContainer" containerID="29b7ceb22d3dfe6928b75436b2b8db935b27d650279fb88c7e2bd402672ad8a8"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.427510 4766 scope.go:117] "RemoveContainer" containerID="8b6a5e00eb0e363beb4163ed64b109efdad6014e6d35f2b1358b2fb9057e6db4"
Jan 30 16:48:28 crc kubenswrapper[4766]: I0130 16:48:28.449977 4766 scope.go:117] "RemoveContainer" containerID="2b053b03cd6fc4ae384ef42a3a1f67b2abeb432fc716aac5c95d03ae04affdd4"
Jan 30 16:48:35 crc kubenswrapper[4766]: I0130 16:48:35.592821 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"
Jan 30 16:48:35 crc kubenswrapper[4766]: E0130 16:48:35.594225 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 16:48:50 crc kubenswrapper[4766]: I0130 16:48:50.039530 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027"
Jan 30 16:48:50 crc kubenswrapper[4766]: E0130 16:48:50.040194 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253253 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2xcg"]
Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253804 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater"
Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253815 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-updater"
"RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253834 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253847 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253852 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253861 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-content" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253867 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-content" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253875 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253880 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253889 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253895 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253902 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server-init" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253908 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server-init" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253919 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253933 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253971 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.253983 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.253989 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 
16:48:55.254001 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254007 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254013 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254018 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254028 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254034 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254043 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254049 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254058 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254064 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254072 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254080 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254092 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254098 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254106 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-utilities" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254112 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="extract-utilities" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254122 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254128 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 
16:48:55.254138 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254144 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: E0130 16:48:55.254153 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254159 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254298 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-expirer" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254308 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovsdb-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254320 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254327 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="rsync" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254340 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254349 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-reaper" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254360 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254372 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="swift-recon-cron" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254381 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-auditor" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254391 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="350bf3b6-f831-4bd0-a887-8f4b97e294aa" containerName="registry-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254402 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254412 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254421 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254433 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" 
containerName="object-updater" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254442 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="container-server" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254450 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="account-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254461 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a501828-e06b-4096-b555-1ecd9323ee20" containerName="ovs-vswitchd" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.254475 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b182790-0761-450c-85d1-63ddd59ac10f" containerName="object-replicator" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.255469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.282082 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.355266 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: 
\"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.456964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.478478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"community-operators-b2xcg\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.576248 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.872285 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:48:55 crc kubenswrapper[4766]: I0130 16:48:55.898997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerStarted","Data":"10469e338f5860f4a08b1149ed32667edce3f343f0e1a22ed8664ef3328f8240"} Jan 30 16:48:56 crc kubenswrapper[4766]: I0130 16:48:56.907837 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" exitCode=0 Jan 30 16:48:56 crc kubenswrapper[4766]: I0130 16:48:56.907892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb"} Jan 30 16:48:58 crc kubenswrapper[4766]: I0130 16:48:58.927305 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" exitCode=0 Jan 30 16:48:58 crc kubenswrapper[4766]: I0130 16:48:58.927410 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855"} Jan 30 16:48:59 crc kubenswrapper[4766]: I0130 16:48:59.935916 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerStarted","Data":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} Jan 30 16:48:59 crc kubenswrapper[4766]: I0130 16:48:59.957274 4766 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-b2xcg" podStartSLOduration=2.401523901 podStartE2EDuration="4.957255746s" podCreationTimestamp="2026-01-30 16:48:55 +0000 UTC" firstStartedPulling="2026-01-30 16:48:56.909705985 +0000 UTC m=+1591.547663331" lastFinishedPulling="2026-01-30 16:48:59.46543783 +0000 UTC m=+1594.103395176" observedRunningTime="2026-01-30 16:48:59.953075679 +0000 UTC m=+1594.591033025" watchObservedRunningTime="2026-01-30 16:48:59.957255746 +0000 UTC m=+1594.595213082" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.039982 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:05 crc kubenswrapper[4766]: E0130 16:49:05.040516 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.576875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.576975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:05 crc kubenswrapper[4766]: I0130 16:49:05.625394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:06 crc kubenswrapper[4766]: I0130 16:49:06.013444 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:06 crc kubenswrapper[4766]: I0130 16:49:06.072000 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.001280 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2xcg" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" containerID="cri-o://c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" gracePeriod=2 Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.411521 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445484 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445571 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.445607 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") pod \"8acca189-bd24-494d-974b-062f9594b0c8\" (UID: \"8acca189-bd24-494d-974b-062f9594b0c8\") " Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.446996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities" (OuterVolumeSpecName: "utilities") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.453327 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw" (OuterVolumeSpecName: "kube-api-access-bc5jw") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "kube-api-access-bc5jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.505660 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8acca189-bd24-494d-974b-062f9594b0c8" (UID: "8acca189-bd24-494d-974b-062f9594b0c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548366 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548426 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc5jw\" (UniqueName: \"kubernetes.io/projected/8acca189-bd24-494d-974b-062f9594b0c8-kube-api-access-bc5jw\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:08 crc kubenswrapper[4766]: I0130 16:49:08.548442 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8acca189-bd24-494d-974b-062f9594b0c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015468 4766 generic.go:334] "Generic (PLEG): container finished" podID="8acca189-bd24-494d-974b-062f9594b0c8" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" exitCode=0 Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2xcg" event={"ID":"8acca189-bd24-494d-974b-062f9594b0c8","Type":"ContainerDied","Data":"10469e338f5860f4a08b1149ed32667edce3f343f0e1a22ed8664ef3328f8240"} Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015574 4766 scope.go:117] "RemoveContainer" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.015705 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2xcg" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.035245 4766 scope.go:117] "RemoveContainer" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.054430 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.061149 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2xcg"] Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.064820 4766 scope.go:117] "RemoveContainer" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.083853 4766 scope.go:117] "RemoveContainer" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.084448 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": container with ID starting with c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862 not found: ID does not exist" containerID="c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.084505 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862"} err="failed to get container status \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": rpc error: code = NotFound desc = could not find container \"c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862\": container with ID starting with c0d96825be792b1c6d5a779eb6182d08485c2e21d3c5cbab9f2636c0f3701862 not found: ID does not exist" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.084543 4766 scope.go:117] "RemoveContainer" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.084990 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": container with ID starting with 88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855 not found: ID does not exist" containerID="88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085037 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855"} err="failed to get container status \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": rpc error: code = NotFound desc = could not find container \"88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855\": container with ID starting with 88f1bddc8c3c91a270bf45d180299290c8c455d8e2c3292409a7348dc221f855 not found: ID does not exist" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085063 4766 scope.go:117] "RemoveContainer" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: E0130 16:49:09.085687 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": container with ID starting with 02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb not found: ID does not exist" containerID="02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb" Jan 30 16:49:09 crc kubenswrapper[4766]: I0130 16:49:09.085723 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb"} err="failed to get container status \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": rpc error: code = NotFound desc = could not find container \"02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb\": container with ID starting with 02067043650d85f94e947e0786fda5fdd828a0e4b3cbce844fa8cff64ab9dfdb not found: ID does not exist" Jan 30 16:49:10 crc kubenswrapper[4766]: I0130 16:49:10.047444 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acca189-bd24-494d-974b-062f9594b0c8" path="/var/lib/kubelet/pods/8acca189-bd24-494d-974b-062f9594b0c8/volumes" Jan 30 16:49:17 crc kubenswrapper[4766]: I0130 16:49:17.039001 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:17 crc kubenswrapper[4766]: E0130 16:49:17.041102 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.796203 4766 scope.go:117] "RemoveContainer" containerID="590619885e87e1a14deb1f9f567a37d743fd8966bf2a912bbf096d5bd9ef44b7" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.854701 4766 scope.go:117] "RemoveContainer" containerID="486e761914f506c8f715baf8a899185c1691423ce4dc1690c67bd2bf32714c57" Jan 30 16:49:28 crc kubenswrapper[4766]: I0130 16:49:28.966893 4766 scope.go:117] "RemoveContainer" containerID="fb2ca6c4c30cdfea0387f0737fa8335ebccfac0d91ab6a883ee48bb871ca5508" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.001444 4766 scope.go:117] "RemoveContainer" containerID="d472b2710d2b86d4d81d4fb6b931148f6dd0a1a2e9b155c00e350e8d497251f8" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.024658 4766 scope.go:117] "RemoveContainer" containerID="41ae1fdf6e3a258b7f3ba76000e1d22b3902137f00a4cd0b5ed0e97ffdf576d3" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.039757 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:29 crc kubenswrapper[4766]: E0130 16:49:29.040077 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.056685 4766 scope.go:117] "RemoveContainer" 
containerID="c109162953a72a45d6f1c14f847bc29a8241f51dc6338795a5b5a228252ba405" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.092387 4766 scope.go:117] "RemoveContainer" containerID="23f20e6f2114bc8f2119ea3e2aff96d54925d71ba01791ac4a7d732855922c81" Jan 30 16:49:29 crc kubenswrapper[4766]: I0130 16:49:29.119211 4766 scope.go:117] "RemoveContainer" containerID="05de0f2960640a1d96ef314bfdd72efd8f32f0b341093df6924e01cbf4898754" Jan 30 16:49:41 crc kubenswrapper[4766]: I0130 16:49:41.039333 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:41 crc kubenswrapper[4766]: E0130 16:49:41.040094 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:49:56 crc kubenswrapper[4766]: I0130 16:49:56.045983 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:49:56 crc kubenswrapper[4766]: E0130 16:49:56.046916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:10 crc kubenswrapper[4766]: I0130 16:50:10.040814 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:10 crc kubenswrapper[4766]: E0130 16:50:10.041592 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:22 crc kubenswrapper[4766]: I0130 16:50:22.039094 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:22 crc kubenswrapper[4766]: E0130 16:50:22.040107 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.297271 4766 scope.go:117] "RemoveContainer" containerID="ffb6abd846e3b8a61ca7c66fafb67111cf511533b90b2d4f5d986377b3dc5cfe" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.321342 4766 scope.go:117] "RemoveContainer" containerID="9307aab20bd3270327a754ce5f0bf1e56e353502d938552c29a20aa0ffc8654a" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.343473 4766 
scope.go:117] "RemoveContainer" containerID="c614875e8dcd6859612c0ffca023d9ad703182eac04c4334607745a26ed492e7" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.363228 4766 scope.go:117] "RemoveContainer" containerID="a63129fee7968993f35cbb7b7849c29b9a1b79d14cad68020d591e8f586579b1" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.385419 4766 scope.go:117] "RemoveContainer" containerID="ffd3b38875d4c33ec892cb23c7ec536f295d1ae5853614ed528ebfd986790523" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.403431 4766 scope.go:117] "RemoveContainer" containerID="d026a97eccd46197ca4c58ce5cfec6afaefc72df68f93832ff6fb3ba15cfc040" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.426425 4766 scope.go:117] "RemoveContainer" containerID="894f0e780f43b16d39f549c963adf0e206c485f0cd403b0f3895c8cb5e61299b" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.451020 4766 scope.go:117] "RemoveContainer" containerID="0457579c3fc1a9ef824883cd41ddabdf9c479beff458b6eac6ddb0bd7fa49d24" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.467016 4766 scope.go:117] "RemoveContainer" containerID="ea43d9b31d9aa5149b7739b7621868cd96a13807e7953d198fd25510949afdca" Jan 30 16:50:29 crc kubenswrapper[4766]: I0130 16:50:29.484604 4766 scope.go:117] "RemoveContainer" containerID="abfc1996fe1de3fb5534b103074354ef84caf8f9b984c1f476a8f7df648534ed" Jan 30 16:50:35 crc kubenswrapper[4766]: I0130 16:50:35.040128 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:35 crc kubenswrapper[4766]: E0130 16:50:35.041013 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:50:46 crc kubenswrapper[4766]: I0130 16:50:46.046411 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:50:46 crc kubenswrapper[4766]: E0130 16:50:46.047195 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:00 crc kubenswrapper[4766]: I0130 16:51:00.039592 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:00 crc kubenswrapper[4766]: E0130 16:51:00.040291 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:14 crc kubenswrapper[4766]: I0130 16:51:14.039610 4766 scope.go:117] "RemoveContainer" 
containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:14 crc kubenswrapper[4766]: E0130 16:51:14.040335 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:27 crc kubenswrapper[4766]: I0130 16:51:27.039459 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:27 crc kubenswrapper[4766]: E0130 16:51:27.040156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:29 crc kubenswrapper[4766]: I0130 16:51:29.593568 4766 scope.go:117] "RemoveContainer" containerID="53abeb8a5618ddec5f224dfed1ba79dfbbd62eada83931393de17bebf2e1d5ab" Jan 30 16:51:41 crc kubenswrapper[4766]: I0130 16:51:41.039234 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:41 crc kubenswrapper[4766]: E0130 16:51:41.039933 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:51:53 crc kubenswrapper[4766]: I0130 16:51:53.038920 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:51:53 crc kubenswrapper[4766]: E0130 16:51:53.039689 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:07 crc kubenswrapper[4766]: I0130 16:52:07.039082 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:07 crc kubenswrapper[4766]: E0130 16:52:07.039816 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:21 crc kubenswrapper[4766]: I0130 16:52:21.039272 4766 scope.go:117] "RemoveContainer" 
containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:21 crc kubenswrapper[4766]: E0130 16:52:21.040004 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.664710 4766 scope.go:117] "RemoveContainer" containerID="89198eaaa434920b555079a794b492c6b89bd55b10487cc59b3d6ea529f6ecbf" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.683598 4766 scope.go:117] "RemoveContainer" containerID="1d1aebce59ff54c2cba777487e05b9692a4d8d12844694e6387583c2af634532" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.707776 4766 scope.go:117] "RemoveContainer" containerID="c231075c5dfb247437daaaeb176a6b0d3dea211afca691c38725b8939aa2480b" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.729316 4766 scope.go:117] "RemoveContainer" containerID="6416df1047fe308e33b040e08526583d0654fc7b7b0b8ca00590a24d666f84b7" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.748059 4766 scope.go:117] "RemoveContainer" containerID="244b298b75af4ffc60d556fb768c258be1dcf5b89d3142b104861f7e022ebee0" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.789896 4766 scope.go:117] "RemoveContainer" containerID="66e9bc5a59fbbe0d1e3626146e5f88333d931fe0fc8ec6bf9dc52c16d98e0f27" Jan 30 16:52:29 crc kubenswrapper[4766]: I0130 16:52:29.824669 4766 scope.go:117] "RemoveContainer" containerID="a9df41b3a8490f673ad155b5c39e9bf02895871bbd8788cd418cae112017c56d" Jan 30 16:52:33 crc kubenswrapper[4766]: I0130 16:52:33.040067 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:33 crc kubenswrapper[4766]: E0130 16:52:33.040579 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:52:48 crc kubenswrapper[4766]: I0130 16:52:48.040914 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:52:48 crc kubenswrapper[4766]: E0130 16:52:48.042265 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:53:02 crc kubenswrapper[4766]: I0130 16:53:02.039506 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:53:02 crc kubenswrapper[4766]: E0130 16:53:02.040382 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:53:14 crc kubenswrapper[4766]: I0130 16:53:14.039895 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:53:14 crc kubenswrapper[4766]: I0130 16:53:14.713053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.931502 4766 scope.go:117] "RemoveContainer" containerID="3934a8d326c2a8169efe654a399cd58d8c317187c849765e7f39b9c86a22d5e0" Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.958654 4766 scope.go:117] "RemoveContainer" containerID="3c7e5951b1b314e1a5b3490f28ca27b5bee52ad67a2efedf1cde2e1c8e97d6ab" Jan 30 16:54:29 crc kubenswrapper[4766]: I0130 16:54:29.987791 4766 scope.go:117] "RemoveContainer" containerID="bb2542f1624c71e872f5681c6672d1606fbeb6f074e817a27e9c2f3df9fbc43a" Jan 30 16:55:39 crc kubenswrapper[4766]: I0130 16:55:39.045887 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:55:39 crc kubenswrapper[4766]: I0130 16:55:39.046433 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:09 crc kubenswrapper[4766]: I0130 16:56:09.045090 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:56:09 crc kubenswrapper[4766]: I0130 16:56:09.045667 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.379350 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380282 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-content" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380296 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-content" Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380315 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380321 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: E0130 16:56:34.380334 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-utilities" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380341 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="extract-utilities" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.380479 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acca189-bd24-494d-974b-062f9594b0c8" containerName="registry-server" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.381428 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.389275 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.532654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634149 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod 
\"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.634936 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.660931 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"certified-operators-wpxvx\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:34 crc kubenswrapper[4766]: I0130 16:56:34.700536 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:35 crc kubenswrapper[4766]: I0130 16:56:35.226100 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.097855 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8" exitCode=0 Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.097996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8"} Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.098253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00"} Jan 30 16:56:36 crc kubenswrapper[4766]: I0130 16:56:36.101575 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:56:37 crc kubenswrapper[4766]: I0130 16:56:37.108091 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100"} Jan 30 16:56:38 crc kubenswrapper[4766]: I0130 16:56:38.117254 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100" exitCode=0 Jan 30 16:56:38 crc kubenswrapper[4766]: I0130 16:56:38.117337 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" 
event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100"} Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045646 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045955 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.045995 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.046396 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.046455 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" gracePeriod=600 Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.126465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerStarted","Data":"fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a"} Jan 30 16:56:39 crc kubenswrapper[4766]: I0130 16:56:39.149850 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpxvx" podStartSLOduration=2.748112008 podStartE2EDuration="5.149827911s" podCreationTimestamp="2026-01-30 16:56:34 +0000 UTC" firstStartedPulling="2026-01-30 16:56:36.101381681 +0000 UTC m=+2050.739339027" lastFinishedPulling="2026-01-30 16:56:38.503097584 +0000 UTC m=+2053.141054930" observedRunningTime="2026-01-30 16:56:39.142636633 +0000 UTC m=+2053.780593989" watchObservedRunningTime="2026-01-30 16:56:39.149827911 +0000 UTC m=+2053.787785257" Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.135962 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" exitCode=0 Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136042 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872"} Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136896 4766 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} Jan 30 16:56:40 crc kubenswrapper[4766]: I0130 16:56:40.136918 4766 scope.go:117] "RemoveContainer" containerID="d754c53961dd92b39433a4ade8bd484064163d9f01082dc2421d03e086c5b027" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.701017 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.701720 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:44 crc kubenswrapper[4766]: I0130 16:56:44.746736 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:45 crc kubenswrapper[4766]: I0130 16:56:45.253009 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:45 crc kubenswrapper[4766]: I0130 16:56:45.331559 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:47 crc kubenswrapper[4766]: I0130 16:56:47.193642 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wpxvx" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" containerID="cri-o://fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a" gracePeriod=2 Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.206778 4766 generic.go:334] "Generic (PLEG): container finished" podID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerID="fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a" exitCode=0 Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.206842 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a"} Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.207321 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpxvx" event={"ID":"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3","Type":"ContainerDied","Data":"6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00"} Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.207374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2d0f7be86b3c67cd6f21bd74e8e22e8c307143254904474497446b1ffc7a00" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.211668 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333000 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333086 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.333213 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") pod \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\" (UID: \"fcc1e6c0-a32a-4e87-9073-66f9e0107fe3\") " Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.334233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities" (OuterVolumeSpecName: "utilities") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.340769 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz" (OuterVolumeSpecName: "kube-api-access-l58wz") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "kube-api-access-l58wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.434397 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l58wz\" (UniqueName: \"kubernetes.io/projected/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-kube-api-access-l58wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.434434 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:48 crc kubenswrapper[4766]: I0130 16:56:48.944019 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" (UID: "fcc1e6c0-a32a-4e87-9073-66f9e0107fe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.042514 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.214360 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpxvx" Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.259305 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:49 crc kubenswrapper[4766]: I0130 16:56:49.267106 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wpxvx"] Jan 30 16:56:50 crc kubenswrapper[4766]: I0130 16:56:50.054154 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" path="/var/lib/kubelet/pods/fcc1e6c0-a32a-4e87-9073-66f9e0107fe3/volumes" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.790226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791170 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-utilities" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791199 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-utilities" Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791218 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791230 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: E0130 16:57:40.791250 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-content" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791258 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="extract-content" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.791410 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc1e6c0-a32a-4e87-9073-66f9e0107fe3" containerName="registry-server" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.792411 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.806755 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873331 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873433 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.873622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975538 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975621 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.975647 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.976213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:40 crc kubenswrapper[4766]: I0130 16:57:40.976355 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.002619 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"redhat-marketplace-sl26v\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.113418 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:41 crc kubenswrapper[4766]: I0130 16:57:41.615463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.580252 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4" exitCode=0 Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.580355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4"} Jan 30 16:57:42 crc kubenswrapper[4766]: I0130 16:57:42.582330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerStarted","Data":"ccdd5bdfa52ee79edb7f6774eaa13904e61d886e41d076b7148081f587c764b4"} Jan 30 16:57:43 crc kubenswrapper[4766]: I0130 16:57:43.593585 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8" exitCode=0 Jan 30 16:57:43 crc kubenswrapper[4766]: I0130 16:57:43.593678 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8"} Jan 30 16:57:44 crc kubenswrapper[4766]: I0130 16:57:44.602848 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerStarted","Data":"b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f"} Jan 30 16:57:44 crc kubenswrapper[4766]: I0130 16:57:44.623848 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl26v" podStartSLOduration=3.212726439 podStartE2EDuration="4.623831938s" podCreationTimestamp="2026-01-30 16:57:40 +0000 UTC" firstStartedPulling="2026-01-30 16:57:42.582273907 +0000 UTC m=+2117.220231253" lastFinishedPulling="2026-01-30 16:57:43.993379406 +0000 UTC m=+2118.631336752" observedRunningTime="2026-01-30 16:57:44.618641876 +0000 UTC m=+2119.256599232" watchObservedRunningTime="2026-01-30 16:57:44.623831938 +0000 UTC m=+2119.261789284" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.114443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.115002 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.156226 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:51 crc kubenswrapper[4766]: I0130 16:57:51.685114 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:52 crc kubenswrapper[4766]: I0130 16:57:52.514491 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:53 crc kubenswrapper[4766]: I0130 16:57:53.661322 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl26v" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" containerID="cri-o://b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" gracePeriod=2 Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.670797 4766 generic.go:334] "Generic (PLEG): container finished" podID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerID="b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" exitCode=0 Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.671152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f"} Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.783162 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877559 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.877672 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") pod \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\" (UID: \"4fa1d02b-4884-4bcd-ba71-4b69e1671d30\") " Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.878830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities" (OuterVolumeSpecName: "utilities") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.883518 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff" (OuterVolumeSpecName: "kube-api-access-btzff") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "kube-api-access-btzff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.906918 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa1d02b-4884-4bcd-ba71-4b69e1671d30" (UID: "4fa1d02b-4884-4bcd-ba71-4b69e1671d30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979893 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979954 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:54 crc kubenswrapper[4766]: I0130 16:57:54.979981 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btzff\" (UniqueName: \"kubernetes.io/projected/4fa1d02b-4884-4bcd-ba71-4b69e1671d30-kube-api-access-btzff\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121563 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-content" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121920 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-content" Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121932 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121940 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: E0130 16:57:55.121965 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-utilities" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.121973 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="extract-utilities" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.122172 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" containerName="registry-server" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.123379 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.133765 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182147 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182222 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.182252 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283411 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283558 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.283612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.284055 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.284147 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.302654 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"redhat-operators-lbvcl\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.452084 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl26v" event={"ID":"4fa1d02b-4884-4bcd-ba71-4b69e1671d30","Type":"ContainerDied","Data":"ccdd5bdfa52ee79edb7f6774eaa13904e61d886e41d076b7148081f587c764b4"} Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679065 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl26v" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.679079 4766 scope.go:117] "RemoveContainer" containerID="b0d27c7cc97e9c50a66b428072c19e09a06ff5634d819fdde93e235c786f8d2f" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.721818 4766 scope.go:117] "RemoveContainer" containerID="0b46c073450b9756e49d90514c67e16f5190e7096915555a1e1ddc39bf8742c8" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.722588 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.739985 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl26v"] Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.756875 4766 scope.go:117] "RemoveContainer" containerID="739174ead9c392fe0dd2c0f53bf7ea422a402253a0952d03bf93603427e19cc4" Jan 30 16:57:55 crc kubenswrapper[4766]: I0130 16:57:55.940306 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.049642 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa1d02b-4884-4bcd-ba71-4b69e1671d30" path="/var/lib/kubelet/pods/4fa1d02b-4884-4bcd-ba71-4b69e1671d30/volumes" Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704196 4766 generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9" exitCode=0 Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9"} Jan 30 16:57:56 crc kubenswrapper[4766]: I0130 16:57:56.704520 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"1b170da8fc570bba1de5c18062ea65fa9bbbbb36c3da01230677781c904c66f0"} Jan 30 16:57:57 crc kubenswrapper[4766]: I0130 16:57:57.716045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f"} Jan 30 16:57:58 crc kubenswrapper[4766]: I0130 16:57:58.725726 4766 
generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f" exitCode=0 Jan 30 16:57:58 crc kubenswrapper[4766]: I0130 16:57:58.725806 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f"} Jan 30 16:57:59 crc kubenswrapper[4766]: I0130 16:57:59.758639 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerStarted","Data":"07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751"} Jan 30 16:57:59 crc kubenswrapper[4766]: I0130 16:57:59.785785 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lbvcl" podStartSLOduration=2.386991377 podStartE2EDuration="4.785763423s" podCreationTimestamp="2026-01-30 16:57:55 +0000 UTC" firstStartedPulling="2026-01-30 16:57:56.707334547 +0000 UTC m=+2131.345291893" lastFinishedPulling="2026-01-30 16:57:59.106106603 +0000 UTC m=+2133.744063939" observedRunningTime="2026-01-30 16:57:59.779401487 +0000 UTC m=+2134.417358833" watchObservedRunningTime="2026-01-30 16:57:59.785763423 +0000 UTC m=+2134.423720779" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.452848 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.453216 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.502039 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:05 crc kubenswrapper[4766]: I0130 16:58:05.842969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.109768 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.110452 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lbvcl" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" containerID="cri-o://07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" gracePeriod=2 Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.824463 4766 generic.go:334] "Generic (PLEG): container finished" podID="6890c084-11c8-4290-86ee-2fb441a2b063" containerID="07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" exitCode=0 Jan 30 16:58:09 crc kubenswrapper[4766]: I0130 16:58:09.824531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751"} Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.083216 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196815 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.196969 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") pod \"6890c084-11c8-4290-86ee-2fb441a2b063\" (UID: \"6890c084-11c8-4290-86ee-2fb441a2b063\") " Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.197658 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities" (OuterVolumeSpecName: "utilities") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.202554 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2" (OuterVolumeSpecName: "kube-api-access-dwvh2") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "kube-api-access-dwvh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.298529 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvh2\" (UniqueName: \"kubernetes.io/projected/6890c084-11c8-4290-86ee-2fb441a2b063-kube-api-access-dwvh2\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.298562 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.320799 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6890c084-11c8-4290-86ee-2fb441a2b063" (UID: "6890c084-11c8-4290-86ee-2fb441a2b063"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.400080 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6890c084-11c8-4290-86ee-2fb441a2b063-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbvcl" event={"ID":"6890c084-11c8-4290-86ee-2fb441a2b063","Type":"ContainerDied","Data":"1b170da8fc570bba1de5c18062ea65fa9bbbbb36c3da01230677781c904c66f0"} Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836751 4766 scope.go:117] "RemoveContainer" containerID="07dc4015a28e7998d766fc454b1bfbaf1c839f1de0e8998d644294ab33b29751" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.836871 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbvcl" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.857376 4766 scope.go:117] "RemoveContainer" containerID="18de966600049282368a8bad3c0e760ae11cf68ff7265b22c312d56a7faefa2f" Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.874936 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.880711 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lbvcl"] Jan 30 16:58:10 crc kubenswrapper[4766]: I0130 16:58:10.907798 4766 scope.go:117] "RemoveContainer" containerID="f2423a0776230d6cb57f6a986310385cc1e6bf3dd436375b29a2992f3b112ca9" Jan 30 16:58:12 crc kubenswrapper[4766]: I0130 16:58:12.048672 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" path="/var/lib/kubelet/pods/6890c084-11c8-4290-86ee-2fb441a2b063/volumes" Jan 30 16:58:39 crc kubenswrapper[4766]: I0130 16:58:39.045100 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:58:39 crc kubenswrapper[4766]: I0130 16:58:39.045627 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.045613 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.046288 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.859846 4766 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860244 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-content" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860262 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-content" Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860273 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860281 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: E0130 16:59:09.860308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-utilities" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860319 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="extract-utilities" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.860521 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6890c084-11c8-4290-86ee-2fb441a2b063" containerName="registry-server" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.861739 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:09 crc kubenswrapper[4766]: I0130 16:59:09.869641 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.009492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.010222 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.010321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112198 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.112251 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.113278 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.113524 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.138379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"community-operators-9869j\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.183908 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:10 crc kubenswrapper[4766]: I0130 16:59:10.679971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212704 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" exitCode=0 Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212758 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43"} Jan 30 16:59:11 crc kubenswrapper[4766]: I0130 16:59:11.212788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"f11f3d79fd87671cc27f1787b9c35d3fc4e26257bf6aaca3cfab79e3d4d29c01"} Jan 30 16:59:12 crc kubenswrapper[4766]: I0130 16:59:12.222430 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} Jan 30 16:59:13 crc kubenswrapper[4766]: I0130 16:59:13.230162 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" exitCode=0 Jan 30 16:59:13 crc kubenswrapper[4766]: I0130 16:59:13.230307 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} Jan 30 16:59:14 crc kubenswrapper[4766]: I0130 16:59:14.240406 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerStarted","Data":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} Jan 30 16:59:14 crc kubenswrapper[4766]: I0130 16:59:14.258873 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9869j" podStartSLOduration=2.824696718 podStartE2EDuration="5.258852101s" podCreationTimestamp="2026-01-30 16:59:09 +0000 UTC" firstStartedPulling="2026-01-30 16:59:11.215314807 +0000 UTC m=+2205.853272153" lastFinishedPulling="2026-01-30 16:59:13.64947019 +0000 UTC m=+2208.287427536" observedRunningTime="2026-01-30 16:59:14.255879039 +0000 UTC m=+2208.893836385" watchObservedRunningTime="2026-01-30 16:59:14.258852101 +0000 UTC m=+2208.896809467" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.185579 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.185877 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.234196 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.329388 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:20 crc kubenswrapper[4766]: I0130 16:59:20.468350 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.305710 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9869j" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" containerID="cri-o://4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" gracePeriod=2 Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.715403 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.914440 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.914510 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.915362 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities" (OuterVolumeSpecName: "utilities") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.915533 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") pod \"cfad3300-7036-4130-8d07-49650b704e5d\" (UID: \"cfad3300-7036-4130-8d07-49650b704e5d\") " Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.916069 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.919801 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72" (OuterVolumeSpecName: "kube-api-access-qhf72") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). InnerVolumeSpecName "kube-api-access-qhf72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:22 crc kubenswrapper[4766]: I0130 16:59:22.972983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfad3300-7036-4130-8d07-49650b704e5d" (UID: "cfad3300-7036-4130-8d07-49650b704e5d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.016663 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhf72\" (UniqueName: \"kubernetes.io/projected/cfad3300-7036-4130-8d07-49650b704e5d-kube-api-access-qhf72\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.016702 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfad3300-7036-4130-8d07-49650b704e5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315529 4766 generic.go:334] "Generic (PLEG): container finished" podID="cfad3300-7036-4130-8d07-49650b704e5d" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" exitCode=0 Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315571 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315597 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9869j" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315622 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9869j" event={"ID":"cfad3300-7036-4130-8d07-49650b704e5d","Type":"ContainerDied","Data":"f11f3d79fd87671cc27f1787b9c35d3fc4e26257bf6aaca3cfab79e3d4d29c01"} Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.315640 4766 scope.go:117] "RemoveContainer" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.335092 4766 scope.go:117] "RemoveContainer" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.346067 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.353975 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9869j"] Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.372124 4766 scope.go:117] "RemoveContainer" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.386758 4766 scope.go:117] "RemoveContainer" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.387435 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": container with ID starting with 4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376 not found: ID does not exist" containerID="4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387479 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376"} err="failed to get container status 
\"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": rpc error: code = NotFound desc = could not find container \"4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376\": container with ID starting with 4ada81882c93cacd4a461051e6022fc328f13ba528555aa9ab47ebc2a7365376 not found: ID does not exist" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387507 4766 scope.go:117] "RemoveContainer" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.387926 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": container with ID starting with 98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6 not found: ID does not exist" containerID="98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.387978 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6"} err="failed to get container status \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": rpc error: code = NotFound desc = could not find container \"98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6\": container with ID starting with 98296e4d03f17aab9627df71a2bbc74582e1ca9f572b5d5c47e79c543bced9d6 not found: ID does not exist" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.388002 4766 scope.go:117] "RemoveContainer" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: E0130 16:59:23.388426 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": container with ID starting with f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43 not found: ID does not exist" containerID="f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43" Jan 30 16:59:23 crc kubenswrapper[4766]: I0130 16:59:23.388452 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43"} err="failed to get container status \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": rpc error: code = NotFound desc = could not find container \"f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43\": container with ID starting with f7f46bc1bf93df440c0e44324ea2e81685d5604b0ad2aade6c25ed6c9cce2f43 not found: ID does not exist" Jan 30 16:59:24 crc kubenswrapper[4766]: I0130 16:59:24.050664 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfad3300-7036-4130-8d07-49650b704e5d" path="/var/lib/kubelet/pods/cfad3300-7036-4130-8d07-49650b704e5d/volumes" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.045682 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046114 4766 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046152 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046684 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.046735 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" gracePeriod=600 Jan 30 16:59:39 crc kubenswrapper[4766]: E0130 16:59:39.174191 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418656 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" exitCode=0 Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484"} Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.418728 4766 scope.go:117] "RemoveContainer" containerID="00fe48cd7fae11d07bb44a4d280259adee43debb3566040b546b0f1eb6622872" Jan 30 16:59:39 crc kubenswrapper[4766]: I0130 16:59:39.419297 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 16:59:39 crc kubenswrapper[4766]: E0130 16:59:39.419533 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 16:59:54 crc kubenswrapper[4766]: I0130 16:59:54.039831 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 16:59:54 crc kubenswrapper[4766]: E0130 16:59:54.041581 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142006 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142679 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142697 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142738 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-content" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142748 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-content" Jan 30 17:00:00 crc kubenswrapper[4766]: E0130 17:00:00.142763 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-utilities" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.142789 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="extract-utilities" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.143001 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfad3300-7036-4130-8d07-49650b704e5d" containerName="registry-server" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.143613 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145279 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145387 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.145832 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.146140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.155024 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246323 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.246456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.247789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod 
\"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.253641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.262006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"collect-profiles-29496540-qfpbr\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.466970 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:00 crc kubenswrapper[4766]: I0130 17:00:00.890948 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576317 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerID="d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91" exitCode=0 Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576386 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerDied","Data":"d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91"} Jan 30 17:00:01 crc kubenswrapper[4766]: I0130 17:00:01.576602 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerStarted","Data":"81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c"} Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.824353 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.985554 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") pod \"3d00d929-3c4f-4555-b75b-a39750dc609b\" (UID: \"3d00d929-3c4f-4555-b75b-a39750dc609b\") " Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.986514 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.991727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:02 crc kubenswrapper[4766]: I0130 17:00:02.992046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6" (OuterVolumeSpecName: "kube-api-access-sthx6") pod "3d00d929-3c4f-4555-b75b-a39750dc609b" (UID: "3d00d929-3c4f-4555-b75b-a39750dc609b"). InnerVolumeSpecName "kube-api-access-sthx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087107 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthx6\" (UniqueName: \"kubernetes.io/projected/3d00d929-3c4f-4555-b75b-a39750dc609b-kube-api-access-sthx6\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087147 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d00d929-3c4f-4555-b75b-a39750dc609b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.087159 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d00d929-3c4f-4555-b75b-a39750dc609b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" event={"ID":"3d00d929-3c4f-4555-b75b-a39750dc609b","Type":"ContainerDied","Data":"81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c"} Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594440 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81de0c48b6bc80193f93e6c5fa1672a7ec5bfe016ac85fdc034c9958de81096c" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.594502 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr" Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.887987 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 17:00:03 crc kubenswrapper[4766]: I0130 17:00:03.892757 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-brtsv"] Jan 30 17:00:04 crc kubenswrapper[4766]: I0130 17:00:04.050447 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08038447-8cce-4cea-9ef9-f7dbcce48697" path="/var/lib/kubelet/pods/08038447-8cce-4cea-9ef9-f7dbcce48697/volumes" Jan 30 17:00:05 crc kubenswrapper[4766]: I0130 17:00:05.038945 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:00:05 crc kubenswrapper[4766]: E0130 17:00:05.039463 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:16 crc kubenswrapper[4766]: I0130 17:00:16.045072 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:00:16 crc kubenswrapper[4766]: E0130 17:00:16.045940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:28 crc kubenswrapper[4766]: I0130 17:00:28.039111 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:00:28 crc kubenswrapper[4766]: E0130 17:00:28.039835 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:30 crc kubenswrapper[4766]: I0130 17:00:30.130333 4766 scope.go:117] "RemoveContainer" containerID="b112e3544153b7e8a93c7abc5b6cc98c8d5d4abc22a87cb47302149bba9f4cfe" Jan 30 17:00:40 crc kubenswrapper[4766]: I0130 17:00:40.039470 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:00:40 crc kubenswrapper[4766]: E0130 17:00:40.040541 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:00:53 crc kubenswrapper[4766]: I0130 17:00:53.040513 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:00:53 crc kubenswrapper[4766]: E0130 17:00:53.041374 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:01:05 crc kubenswrapper[4766]: I0130 17:01:05.040358 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:01:05 crc kubenswrapper[4766]: E0130 17:01:05.041628 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:01:19 crc kubenswrapper[4766]: I0130 17:01:19.039782 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:01:19 crc kubenswrapper[4766]: E0130 17:01:19.040505 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:01:31 crc kubenswrapper[4766]: I0130 17:01:31.039902 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:01:31 crc kubenswrapper[4766]: E0130 17:01:31.040700 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:01:45 crc kubenswrapper[4766]: I0130 17:01:45.039891 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:01:45 crc kubenswrapper[4766]: E0130 17:01:45.041672 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:01:58 crc kubenswrapper[4766]: I0130 17:01:58.039766 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:01:58 crc kubenswrapper[4766]: E0130 17:01:58.040799 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:02:11 crc kubenswrapper[4766]: I0130 17:02:11.039445 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:02:11 crc kubenswrapper[4766]: E0130 17:02:11.040380 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:02:23 crc kubenswrapper[4766]: I0130 17:02:23.040391 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:02:23 crc kubenswrapper[4766]: E0130 17:02:23.041205 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:02:36 crc kubenswrapper[4766]: I0130 17:02:36.045246 4766 
scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:02:36 crc kubenswrapper[4766]: E0130 17:02:36.046060 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:02:49 crc kubenswrapper[4766]: I0130 17:02:49.039293 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:02:49 crc kubenswrapper[4766]: E0130 17:02:49.040092 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:03:00 crc kubenswrapper[4766]: I0130 17:03:00.040116 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:03:00 crc kubenswrapper[4766]: E0130 17:03:00.040900 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:03:11 crc kubenswrapper[4766]: I0130 17:03:11.039742 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:03:11 crc kubenswrapper[4766]: E0130 17:03:11.040623 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:03:23 crc kubenswrapper[4766]: I0130 17:03:23.039674 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:03:23 crc kubenswrapper[4766]: E0130 17:03:23.040389 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.217006 4766 scope.go:117] "RemoveContainer" containerID="fd37f3cb692fbe2bbeb024aee6c952dc0d0a87c68386d738a8fdaa9dd9d8595a" Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.236633 4766 scope.go:117] 
"RemoveContainer" containerID="06b745d56a0ea7fc12ca81d2c9ba2f319ffff14bd56e607e281e0645c4942100" Jan 30 17:03:30 crc kubenswrapper[4766]: I0130 17:03:30.254359 4766 scope.go:117] "RemoveContainer" containerID="969b3d679aa240cd47b159585dba7aa8a23d90c785984a235cf0e91061c4a1a8" Jan 30 17:03:36 crc kubenswrapper[4766]: I0130 17:03:36.042779 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:03:36 crc kubenswrapper[4766]: E0130 17:03:36.043385 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:03:51 crc kubenswrapper[4766]: I0130 17:03:51.039479 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:03:51 crc kubenswrapper[4766]: E0130 17:03:51.040216 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:04:05 crc kubenswrapper[4766]: I0130 17:04:05.039951 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:04:05 crc kubenswrapper[4766]: E0130 17:04:05.040668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:04:18 crc kubenswrapper[4766]: I0130 17:04:18.039681 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:04:18 crc kubenswrapper[4766]: E0130 17:04:18.040456 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:04:32 crc kubenswrapper[4766]: I0130 17:04:32.038936 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:04:32 crc kubenswrapper[4766]: E0130 17:04:32.039575 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:04:43 crc kubenswrapper[4766]: I0130 17:04:43.039886 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:04:43 crc kubenswrapper[4766]: I0130 17:04:43.500462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"} Jan 30 17:07:09 crc kubenswrapper[4766]: I0130 17:07:09.045343 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:07:09 crc kubenswrapper[4766]: I0130 17:07:09.045905 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.051501 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:24 crc kubenswrapper[4766]: E0130 17:07:24.052488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.052523 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.052661 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" containerName="collect-profiles" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.055218 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.065405 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.158804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.159088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.159140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.261569 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.262032 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.262098 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.280816 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"certified-operators-6sj4c\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.409683 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:24 crc kubenswrapper[4766]: I0130 17:07:24.924865 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552753 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408" exitCode=0 Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552804 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"} Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.552845 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"f76872b006034beb8012688a1eaf0f28f86663996b79ddf0dfcafacdcbde543f"} Jan 30 17:07:25 crc kubenswrapper[4766]: I0130 17:07:25.554648 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:07:26 crc kubenswrapper[4766]: I0130 17:07:26.568232 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"} Jan 30 17:07:27 crc kubenswrapper[4766]: I0130 17:07:27.577547 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c" exitCode=0 Jan 30 17:07:27 crc kubenswrapper[4766]: I0130 17:07:27.577660 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"} Jan 30 17:07:29 crc kubenswrapper[4766]: I0130 17:07:29.594944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerStarted","Data":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"} Jan 30 17:07:29 crc kubenswrapper[4766]: I0130 17:07:29.622870 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6sj4c" podStartSLOduration=2.825000481 podStartE2EDuration="5.622851342s" podCreationTimestamp="2026-01-30 17:07:24 +0000 UTC" firstStartedPulling="2026-01-30 17:07:25.55441188 +0000 UTC m=+2700.192369226" lastFinishedPulling="2026-01-30 17:07:28.352262741 +0000 UTC m=+2702.990220087" observedRunningTime="2026-01-30 17:07:29.617977328 +0000 UTC m=+2704.255934684" watchObservedRunningTime="2026-01-30 
17:07:29.622851342 +0000 UTC m=+2704.260808688" Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.410001 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.410669 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.456091 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.677401 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:34 crc kubenswrapper[4766]: I0130 17:07:34.739892 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:36 crc kubenswrapper[4766]: I0130 17:07:36.640533 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6sj4c" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server" containerID="cri-o://fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" gracePeriod=2 Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.032969 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.142614 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.142747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.143042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") pod \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\" (UID: \"dcbe56d8-9a5b-4234-9031-a67f1cd65a33\") " Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.144945 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities" (OuterVolumeSpecName: "utilities") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.152498 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8" (OuterVolumeSpecName: "kube-api-access-zn9c8") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "kube-api-access-zn9c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.194098 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcbe56d8-9a5b-4234-9031-a67f1cd65a33" (UID: "dcbe56d8-9a5b-4234-9031-a67f1cd65a33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244555 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn9c8\" (UniqueName: \"kubernetes.io/projected/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-kube-api-access-zn9c8\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244601 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.244612 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbe56d8-9a5b-4234-9031-a67f1cd65a33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.657901 4766 generic.go:334] "Generic (PLEG): container finished" podID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" exitCode=0 Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.657953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"} Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658000 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6sj4c" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658021 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6sj4c" event={"ID":"dcbe56d8-9a5b-4234-9031-a67f1cd65a33","Type":"ContainerDied","Data":"f76872b006034beb8012688a1eaf0f28f86663996b79ddf0dfcafacdcbde543f"} Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.658048 4766 scope.go:117] "RemoveContainer" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.677816 4766 scope.go:117] "RemoveContainer" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.709204 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.716485 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6sj4c"] Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.724584 4766 scope.go:117] "RemoveContainer" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.739817 4766 scope.go:117] "RemoveContainer" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.740259 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": container with ID starting with fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243 not found: ID does not exist" containerID="fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740303 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243"} err="failed to get container status \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": rpc error: code = NotFound desc = could not find container \"fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243\": container with ID starting with fa083eec63b2589c0ee11adeb966cdda1361d7471401b63f29d64ec2bc2e3243 not found: ID does not exist" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740330 4766 scope.go:117] "RemoveContainer" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c" Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.740713 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": container with ID starting with 2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c not found: ID does not exist" containerID="2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740748 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c"} err="failed to get container status \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": rpc error: code = NotFound desc = could not find 
container \"2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c\": container with ID starting with 2704ea2ea676deded11d6878f000f4cd94bbf9dc2ff88f62199312004282d75c not found: ID does not exist" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.740768 4766 scope.go:117] "RemoveContainer" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408" Jan 30 17:07:37 crc kubenswrapper[4766]: E0130 17:07:37.741336 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": container with ID starting with ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408 not found: ID does not exist" containerID="ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408" Jan 30 17:07:37 crc kubenswrapper[4766]: I0130 17:07:37.741376 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408"} err="failed to get container status \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": rpc error: code = NotFound desc = could not find container \"ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408\": container with ID starting with ce7ecf7ce1672182b243c55e1be02164403a93392999cdb798c54eb71898e408 not found: ID does not exist" Jan 30 17:07:38 crc kubenswrapper[4766]: I0130 17:07:38.052308 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" path="/var/lib/kubelet/pods/dcbe56d8-9a5b-4234-9031-a67f1cd65a33/volumes" Jan 30 17:07:39 crc kubenswrapper[4766]: I0130 17:07:39.045628 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:07:39 crc kubenswrapper[4766]: I0130 17:07:39.045978 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.045692 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046086 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046129 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046820 4766 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.046899 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" gracePeriod=600 Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888086 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" exitCode=0 Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888200 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a"} Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888743 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"} Jan 30 17:08:09 crc kubenswrapper[4766]: I0130 17:08:09.888769 4766 scope.go:117] "RemoveContainer" containerID="b9d05600caa51ab1651a81b6d711e79ba7c41b6988186880a3f395a3c06b7484" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.899261 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900109 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-content" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900124 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-content" Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900134 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900141 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server" Jan 30 17:08:43 crc kubenswrapper[4766]: E0130 17:08:43.900162 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-utilities" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900170 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="extract-utilities" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.900355 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbe56d8-9a5b-4234-9031-a67f1cd65a33" containerName="registry-server" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.901414 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:43 crc kubenswrapper[4766]: I0130 17:08:43.911811 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024822 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024925 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.024970 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126280 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126387 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.126973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.151135 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"redhat-marketplace-zgxw8\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.220659 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:44 crc kubenswrapper[4766]: I0130 17:08:44.667504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156329 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f" exitCode=0 Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"} Jan 30 17:08:45 crc kubenswrapper[4766]: I0130 17:08:45.156669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerStarted","Data":"cf0f23cc7044c135f42645d5c53ead018194659143f6d3b2e787f14109e47195"} Jan 30 17:08:47 crc kubenswrapper[4766]: I0130 17:08:47.176285 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338" exitCode=0 Jan 30 17:08:47 crc kubenswrapper[4766]: I0130 17:08:47.176324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"} Jan 30 17:08:48 crc kubenswrapper[4766]: I0130 17:08:48.189599 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerStarted","Data":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"} Jan 30 17:08:48 crc kubenswrapper[4766]: I0130 17:08:48.210737 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgxw8" podStartSLOduration=2.80103077 podStartE2EDuration="5.210718946s" podCreationTimestamp="2026-01-30 17:08:43 +0000 UTC" firstStartedPulling="2026-01-30 17:08:45.159988282 +0000 UTC m=+2779.797945678" lastFinishedPulling="2026-01-30 17:08:47.569676508 +0000 UTC m=+2782.207633854" observedRunningTime="2026-01-30 17:08:48.208088386 +0000 UTC m=+2782.846045732" watchObservedRunningTime="2026-01-30 17:08:48.210718946 +0000 UTC m=+2782.848676292" Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.221646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.222445 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.266376 4766 kubelet.go:2542] "SyncLoop 
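Note: in the "Observed pod startup duration" record above, podStartE2EDuration is wall time from pod creation to observed running, while podStartSLOduration excludes the time spent pulling images. On the monotonic clock, lastFinishedPulling minus firstStartedPulling is m=+2782.207633854 - m=+2779.797945678 = 2.409688176s of pulling, and 5.210718946s - 2.409688176s = 2.80103077s, exactly the SLO figure logged. The same arithmetic in Go, with the figures copied from the record (integer nanoseconds to avoid float noise):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures copied from the pod_startup_latency_tracker record above;
    	// the m=+... values are monotonic-clock offsets in seconds.
    	e2e := 5210718946 * time.Nanosecond                    // podStartE2EDuration="5.210718946s"
    	firstStartedPulling := 2779797945678 * time.Nanosecond // m=+2779.797945678
    	lastFinishedPulling := 2782207633854 * time.Nanosecond // m=+2782.207633854

    	slo := e2e - (lastFinishedPulling - firstStartedPulling)
    	fmt.Println(slo) // 2.80103077s, matching podStartSLOduration=2.80103077
    }
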
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.306005 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:54 crc kubenswrapper[4766]: I0130 17:08:54.497865 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.252078 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgxw8" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server" containerID="cri-o://8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" gracePeriod=2 Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.643802 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808508 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808657 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.808687 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") pod \"0857e092-05eb-4415-bd8b-c133565af044\" (UID: \"0857e092-05eb-4415-bd8b-c133565af044\") " Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.809617 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities" (OuterVolumeSpecName: "utilities") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.816344 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt" (OuterVolumeSpecName: "kube-api-access-x24dt") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "kube-api-access-x24dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.848632 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0857e092-05eb-4415-bd8b-c133565af044" (UID: "0857e092-05eb-4415-bd8b-c133565af044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909825 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909868 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857e092-05eb-4415-bd8b-c133565af044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:56 crc kubenswrapper[4766]: I0130 17:08:56.909914 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24dt\" (UniqueName: \"kubernetes.io/projected/0857e092-05eb-4415-bd8b-c133565af044-kube-api-access-x24dt\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261619 4766 generic.go:334] "Generic (PLEG): container finished" podID="0857e092-05eb-4415-bd8b-c133565af044" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" exitCode=0 Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261686 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgxw8" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.261672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"} Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.262350 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgxw8" event={"ID":"0857e092-05eb-4415-bd8b-c133565af044","Type":"ContainerDied","Data":"cf0f23cc7044c135f42645d5c53ead018194659143f6d3b2e787f14109e47195"} Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.262380 4766 scope.go:117] "RemoveContainer" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.278672 4766 scope.go:117] "RemoveContainer" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.298492 4766 scope.go:117] "RemoveContainer" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.307874 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.321760 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgxw8"] Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.339589 4766 scope.go:117] "RemoveContainer" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340070 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": container with ID starting with 8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a not found: ID does not exist" containerID="8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340113 4766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a"} err="failed to get container status \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": rpc error: code = NotFound desc = could not find container \"8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a\": container with ID starting with 8a8c66a79a508c8a5249681e1d28617810e28953550a31b1e7bf9f121497485a not found: ID does not exist" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340136 4766 scope.go:117] "RemoveContainer" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338" Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340539 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": container with ID starting with bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338 not found: ID does not exist" containerID="bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340612 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338"} err="failed to get container status \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": rpc error: code = NotFound desc = could not find container \"bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338\": container with ID starting with bf810bb6f7501ef9039b720c188b1a96b7e5e4a575b9cd4c68f8e6be67abf338 not found: ID does not exist" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340655 4766 scope.go:117] "RemoveContainer" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f" Jan 30 17:08:57 crc kubenswrapper[4766]: E0130 17:08:57.340951 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": container with ID starting with 9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f not found: ID does not exist" containerID="9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f" Jan 30 17:08:57 crc kubenswrapper[4766]: I0130 17:08:57.340977 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f"} err="failed to get container status \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": rpc error: code = NotFound desc = could not find container \"9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f\": container with ID starting with 9dbc09c417ff84604825848895701855cf762a5163a50678b08ab023b2ee8a7f not found: ID does not exist" Jan 30 17:08:58 crc kubenswrapper[4766]: I0130 17:08:58.053068 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857e092-05eb-4415-bd8b-c133565af044" path="/var/lib/kubelet/pods/0857e092-05eb-4415-bd8b-c133565af044/volumes" Jan 30 17:10:09 crc kubenswrapper[4766]: I0130 17:10:09.045764 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
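Note: the UnmountVolume/TearDown records above are the mirror image of the 17:08:44 mounts. Each marketplace catalog pod carries two emptyDir volumes (catalog-content, utilities) plus a projected service-account token volume; kube-api-access-x24dt is the kubelet-generated name for the injected token. A sketch of volumes of roughly that shape in the real Go API types; the projected sources here are a simplified assumption (the actual kube-api-access-* volume also projects the CA bundle and the downward-API namespace alongside the token):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	volumes := []corev1.Volume{
    		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
    			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
    		{Name: "utilities", VolumeSource: corev1.VolumeSource{
    			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
    		// Simplified: only the token projection is shown here.
    		{Name: "kube-api-access-x24dt", VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
    						Path: "token",
    					},
    				}},
    			},
    		}},
    	}
    	for _, v := range volumes {
    		fmt.Println(v.Name)
    	}
    }
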
Jan 30 17:10:09 crc kubenswrapper[4766]: I0130 17:10:09.045764 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:10:09 crc kubenswrapper[4766]: I0130 17:10:09.046327 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.045724 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.046253 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.534866 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"]
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535262 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-utilities"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535280 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-utilities"
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535297 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-content"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535306 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="extract-content"
Jan 30 17:10:39 crc kubenswrapper[4766]: E0130 17:10:39.535321 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535329 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.535516 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0857e092-05eb-4415-bd8b-c133565af044" containerName="registry-server"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.536444 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.557331 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"]
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648726 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648772 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.648806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750759 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.750794 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.751341 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.751492 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.771751 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"redhat-operators-7tb9c\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:39 crc kubenswrapper[4766]: I0130 17:10:39.852320 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.288442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"]
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.528651 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zngnx"]
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.530329 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.541442 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zngnx"]
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667612 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.667871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.769857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.769957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.770516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.793732 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"community-operators-zngnx\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:40 crc kubenswrapper[4766]: I0130 17:10:40.853476 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zngnx"
Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.016964 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" exitCode=0
Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.017219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6"}
Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.017275 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"f0333f596b589563f978aad186da16aacd88d7acd56905ba8557c3d26b41ec37"}
Jan 30 17:10:41 crc kubenswrapper[4766]: I0130 17:10:41.132372 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zngnx"]
Jan 30 17:10:41 crc kubenswrapper[4766]: W0130 17:10:41.135626 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode507d583_4c30_4a78_902f_9b53865469c9.slice/crio-4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f WatchSource:0}: Error finding container 4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f: Status 404 returned error can't find the container with id 4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f
Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.037007 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"}
Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.043566 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" exitCode=0
containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" exitCode=0 Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.053148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e"} Jan 30 17:10:42 crc kubenswrapper[4766]: I0130 17:10:42.053210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerStarted","Data":"4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f"} Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.057001 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" exitCode=0 Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.057049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de"} Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.059396 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" exitCode=0 Jan 30 17:10:43 crc kubenswrapper[4766]: I0130 17:10:43.059437 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.067475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerStarted","Data":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.070983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerStarted","Data":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.094904 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7tb9c" podStartSLOduration=2.674994035 podStartE2EDuration="5.094888346s" podCreationTimestamp="2026-01-30 17:10:39 +0000 UTC" firstStartedPulling="2026-01-30 17:10:41.019971247 +0000 UTC m=+2895.657928593" lastFinishedPulling="2026-01-30 17:10:43.439865558 +0000 UTC m=+2898.077822904" observedRunningTime="2026-01-30 17:10:44.094127185 +0000 UTC m=+2898.732084531" watchObservedRunningTime="2026-01-30 17:10:44.094888346 +0000 UTC m=+2898.732845692" Jan 30 17:10:44 crc kubenswrapper[4766]: I0130 17:10:44.123211 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zngnx" podStartSLOduration=2.705769101 podStartE2EDuration="4.123190806s" podCreationTimestamp="2026-01-30 17:10:40 +0000 UTC" firstStartedPulling="2026-01-30 17:10:42.04590421 +0000 UTC m=+2896.683861556" 
lastFinishedPulling="2026-01-30 17:10:43.463325915 +0000 UTC m=+2898.101283261" observedRunningTime="2026-01-30 17:10:44.115569817 +0000 UTC m=+2898.753527183" watchObservedRunningTime="2026-01-30 17:10:44.123190806 +0000 UTC m=+2898.761148152" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.853304 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.853946 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:49 crc kubenswrapper[4766]: I0130 17:10:49.907450 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.143134 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.181123 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.853963 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.854037 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:50 crc kubenswrapper[4766]: I0130 17:10:50.913728 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:51 crc kubenswrapper[4766]: I0130 17:10:51.157113 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.125697 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7tb9c" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" containerID="cri-o://9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" gracePeriod=2 Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.554970 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.565938 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.753565 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.753936 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.754036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") pod \"a230b4cf-8e5f-4073-9703-f9b0bb153676\" (UID: \"a230b4cf-8e5f-4073-9703-f9b0bb153676\") " Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.755820 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities" (OuterVolumeSpecName: "utilities") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.760022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh" (OuterVolumeSpecName: "kube-api-access-fprdh") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "kube-api-access-fprdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.855293 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:52 crc kubenswrapper[4766]: I0130 17:10:52.855336 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fprdh\" (UniqueName: \"kubernetes.io/projected/a230b4cf-8e5f-4073-9703-f9b0bb153676-kube-api-access-fprdh\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.118865 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a230b4cf-8e5f-4073-9703-f9b0bb153676" (UID: "a230b4cf-8e5f-4073-9703-f9b0bb153676"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134704 4766 generic.go:334] "Generic (PLEG): container finished" podID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" exitCode=0 Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134766 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7tb9c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134789 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134829 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7tb9c" event={"ID":"a230b4cf-8e5f-4073-9703-f9b0bb153676","Type":"ContainerDied","Data":"f0333f596b589563f978aad186da16aacd88d7acd56905ba8557c3d26b41ec37"} Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.134849 4766 scope.go:117] "RemoveContainer" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.135269 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zngnx" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" containerID="cri-o://3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" gracePeriod=2 Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.161142 4766 scope.go:117] "RemoveContainer" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.167817 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a230b4cf-8e5f-4073-9703-f9b0bb153676-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.175553 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.182298 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7tb9c"] Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.199617 4766 scope.go:117] "RemoveContainer" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.341519 4766 scope.go:117] "RemoveContainer" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.342608 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": container with ID starting with 9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5 not found: ID does not exist" containerID="9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.342657 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5"} err="failed to get container status \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": rpc error: code = NotFound desc = could not find container \"9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5\": container with ID starting with 9bcccebb2367c627ac434306c10f28f87c0f4690fdbec71abdfd31231de0b8e5 not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.342690 4766 scope.go:117] 
"RemoveContainer" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.343511 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": container with ID starting with 9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c not found: ID does not exist" containerID="9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.343540 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c"} err="failed to get container status \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": rpc error: code = NotFound desc = could not find container \"9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c\": container with ID starting with 9da17ae8ec9d9e5e04ce2d515bc09cabd9526e1f863e580f19b0f4a241a7f58c not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.343562 4766 scope.go:117] "RemoveContainer" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: E0130 17:10:53.344772 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": container with ID starting with 202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6 not found: ID does not exist" containerID="202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.344816 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6"} err="failed to get container status \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": rpc error: code = NotFound desc = could not find container \"202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6\": container with ID starting with 202f28b02298f21c33c1f35e3a5e23a0cb214ea7a644dc3e562c70d5a7f17be6 not found: ID does not exist" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.553493 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573467 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.573492 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") pod \"e507d583-4c30-4a78-902f-9b53865469c9\" (UID: \"e507d583-4c30-4a78-902f-9b53865469c9\") " Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.574707 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities" (OuterVolumeSpecName: "utilities") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.578267 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt" (OuterVolumeSpecName: "kube-api-access-cfswt") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "kube-api-access-cfswt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.629337 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e507d583-4c30-4a78-902f-9b53865469c9" (UID: "e507d583-4c30-4a78-902f-9b53865469c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675503 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfswt\" (UniqueName: \"kubernetes.io/projected/e507d583-4c30-4a78-902f-9b53865469c9-kube-api-access-cfswt\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675801 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:53 crc kubenswrapper[4766]: I0130 17:10:53.675880 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e507d583-4c30-4a78-902f-9b53865469c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.048127 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" path="/var/lib/kubelet/pods/a230b4cf-8e5f-4073-9703-f9b0bb153676/volumes" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143535 4766 generic.go:334] "Generic (PLEG): container finished" podID="e507d583-4c30-4a78-902f-9b53865469c9" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" exitCode=0 Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143602 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.143656 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zngnx" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.144478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zngnx" event={"ID":"e507d583-4c30-4a78-902f-9b53865469c9","Type":"ContainerDied","Data":"4dec609aa4885b78f47eb0e4e6f2d968e7a0eb19591ef99af791ed738dbfcf3f"} Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.144508 4766 scope.go:117] "RemoveContainer" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.175596 4766 scope.go:117] "RemoveContainer" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.183554 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.191537 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zngnx"] Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.198103 4766 scope.go:117] "RemoveContainer" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.213890 4766 scope.go:117] "RemoveContainer" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.214286 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": container with ID starting with 3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9 not found: ID does not exist" containerID="3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214320 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9"} err="failed to get container status \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": rpc error: code = NotFound desc = could not find container \"3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9\": container with ID starting with 3da8ea54efb2ffb570d1577df49998152fd78da19ab9060b5cce0725281985d9 not found: ID does not exist" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214341 4766 scope.go:117] "RemoveContainer" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.214578 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": container with ID starting with 3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de not found: ID does not exist" containerID="3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214610 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de"} err="failed to get container status \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": rpc error: code = NotFound desc = could not find 
container \"3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de\": container with ID starting with 3dd6a530ff7c14d38801ca3930f0c13803a0a7d893eebee3e03f62322b4102de not found: ID does not exist" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.214626 4766 scope.go:117] "RemoveContainer" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: E0130 17:10:54.215006 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": container with ID starting with d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e not found: ID does not exist" containerID="d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e" Jan 30 17:10:54 crc kubenswrapper[4766]: I0130 17:10:54.215029 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e"} err="failed to get container status \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": rpc error: code = NotFound desc = could not find container \"d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e\": container with ID starting with d1641c04fdaf2c4bb1b7a1a71f2402b193ed0cc9257453d33d9e9bea7cd9023e not found: ID does not exist" Jan 30 17:10:56 crc kubenswrapper[4766]: I0130 17:10:56.049423 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e507d583-4c30-4a78-902f-9b53865469c9" path="/var/lib/kubelet/pods/e507d583-4c30-4a78-902f-9b53865469c9/volumes" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.045542 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.046649 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.046744 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.047780 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:11:09 crc kubenswrapper[4766]: I0130 17:11:09.047861 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" gracePeriod=600 Jan 30 17:11:09 crc kubenswrapper[4766]: E0130 17:11:09.717620 4766 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.268961 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" exitCode=0 Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.269050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9"} Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.270065 4766 scope.go:117] "RemoveContainer" containerID="5d3becd45505d4de521190d32436097d94d4667af4d51e364dad238f886a491a" Jan 30 17:11:10 crc kubenswrapper[4766]: I0130 17:11:10.270628 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:10 crc kubenswrapper[4766]: E0130 17:11:10.270881 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:25 crc kubenswrapper[4766]: I0130 17:11:25.039070 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:25 crc kubenswrapper[4766]: E0130 17:11:25.039887 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:36 crc kubenswrapper[4766]: I0130 17:11:36.039115 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:36 crc kubenswrapper[4766]: E0130 17:11:36.040034 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:11:50 crc kubenswrapper[4766]: I0130 17:11:50.040034 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:11:50 crc kubenswrapper[4766]: E0130 17:11:50.040855 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:02 crc kubenswrapper[4766]: I0130 17:12:02.040022 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:02 crc kubenswrapper[4766]: E0130 17:12:02.040990 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:14 crc kubenswrapper[4766]: I0130 17:12:14.040059 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:14 crc kubenswrapper[4766]: E0130 17:12:14.041704 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:27 crc kubenswrapper[4766]: I0130 17:12:27.039845 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:27 crc kubenswrapper[4766]: E0130 17:12:27.041275 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:39 crc kubenswrapper[4766]: I0130 17:12:39.040053 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:39 crc kubenswrapper[4766]: E0130 17:12:39.040926 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:12:52 crc kubenswrapper[4766]: I0130 17:12:52.040009 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:12:52 crc kubenswrapper[4766]: E0130 17:12:52.040606 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:06 crc kubenswrapper[4766]: I0130 17:13:06.039828 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:06 crc kubenswrapper[4766]: E0130 17:13:06.040725 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:19 crc kubenswrapper[4766]: I0130 17:13:19.040297 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:19 crc kubenswrapper[4766]: E0130 17:13:19.041865 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:31 crc kubenswrapper[4766]: I0130 17:13:31.039440 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:31 crc kubenswrapper[4766]: E0130 17:13:31.040288 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:42 crc kubenswrapper[4766]: I0130 17:13:42.039417 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:42 crc kubenswrapper[4766]: E0130 17:13:42.040916 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:13:53 crc kubenswrapper[4766]: I0130 17:13:53.039554 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:13:53 crc kubenswrapper[4766]: E0130 17:13:53.040387 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:04 crc kubenswrapper[4766]: I0130 17:14:04.040619 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:04 crc kubenswrapper[4766]: E0130 17:14:04.041853 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:15 crc kubenswrapper[4766]: I0130 17:14:15.039122 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:15 crc kubenswrapper[4766]: E0130 17:14:15.039836 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:30 crc kubenswrapper[4766]: I0130 17:14:30.039532 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:30 crc kubenswrapper[4766]: E0130 17:14:30.040454 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:43 crc kubenswrapper[4766]: I0130 17:14:43.039313 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:43 crc kubenswrapper[4766]: E0130 17:14:43.041740 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:14:54 crc kubenswrapper[4766]: I0130 17:14:54.039305 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:14:54 crc kubenswrapper[4766]: E0130 17:14:54.040245 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.157672 4766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.159715 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.159830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.159915 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.159997 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160083 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160158 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160250 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160332 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160409 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160491 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="extract-content" Jan 30 17:15:00 crc kubenswrapper[4766]: E0130 17:15:00.160595 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.160699 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="extract-utilities" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.161050 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a230b4cf-8e5f-4073-9703-f9b0bb153676" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.161160 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e507d583-4c30-4a78-902f-9b53865469c9" containerName="registry-server" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.162033 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.164505 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.164685 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.167871 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294649 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294781 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.294836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396023 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.396331 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.397640 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod 
\"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.402478 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.412241 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"collect-profiles-29496555-l7zjm\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.485991 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:00 crc kubenswrapper[4766]: I0130 17:15:00.932637 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"] Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.075450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerStarted","Data":"e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"} Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.075884 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerStarted","Data":"7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58"} Jan 30 17:15:01 crc kubenswrapper[4766]: I0130 17:15:01.091771 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" podStartSLOduration=1.091749869 podStartE2EDuration="1.091749869s" podCreationTimestamp="2026-01-30 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:01.089416776 +0000 UTC m=+3155.727374122" watchObservedRunningTime="2026-01-30 17:15:01.091749869 +0000 UTC m=+3155.729707215" Jan 30 17:15:02 crc kubenswrapper[4766]: I0130 17:15:02.082397 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerID="e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b" exitCode=0 Jan 30 17:15:02 crc kubenswrapper[4766]: I0130 17:15:02.082452 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerDied","Data":"e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"} Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.317130 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442249 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442486 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.442548 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") pod \"20c37317-bc31-4749-bf2a-000f3786ebdb\" (UID: \"20c37317-bc31-4749-bf2a-000f3786ebdb\") " Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.443569 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "20c37317-bc31-4749-bf2a-000f3786ebdb" (UID: "20c37317-bc31-4749-bf2a-000f3786ebdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.447868 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20c37317-bc31-4749-bf2a-000f3786ebdb" (UID: "20c37317-bc31-4749-bf2a-000f3786ebdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.447868 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758" (OuterVolumeSpecName: "kube-api-access-94758") pod "20c37317-bc31-4749-bf2a-000f3786ebdb" (UID: "20c37317-bc31-4749-bf2a-000f3786ebdb"). InnerVolumeSpecName "kube-api-access-94758". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544234 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c37317-bc31-4749-bf2a-000f3786ebdb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544270 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c37317-bc31-4749-bf2a-000f3786ebdb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4766]: I0130 17:15:03.544283 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94758\" (UniqueName: \"kubernetes.io/projected/20c37317-bc31-4749-bf2a-000f3786ebdb-kube-api-access-94758\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" event={"ID":"20c37317-bc31-4749-bf2a-000f3786ebdb","Type":"ContainerDied","Data":"7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58"} Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097858 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a35e248c8397a411954c6581821563040299233281df19d033970d285a3de58" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.097878 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm" Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.393459 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 17:15:04 crc kubenswrapper[4766]: I0130 17:15:04.398309 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-glrms"] Jan 30 17:15:06 crc kubenswrapper[4766]: I0130 17:15:06.049480 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabaaf93-f51e-4847-b39a-8ecccc43f8d4" path="/var/lib/kubelet/pods/aabaaf93-f51e-4847-b39a-8ecccc43f8d4/volumes" Jan 30 17:15:09 crc kubenswrapper[4766]: I0130 17:15:09.039767 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:09 crc kubenswrapper[4766]: E0130 17:15:09.040067 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:22 crc kubenswrapper[4766]: I0130 17:15:22.039387 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:22 crc kubenswrapper[4766]: E0130 17:15:22.040235 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:30 crc kubenswrapper[4766]: I0130 17:15:30.514511 4766 scope.go:117] "RemoveContainer" containerID="add3babd5c979004ca5cf98ed2207ebf2c3f7f606e68f1380f3bcb0131882a0e" Jan 30 17:15:33 crc kubenswrapper[4766]: I0130 17:15:33.039783 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:33 crc kubenswrapper[4766]: E0130 17:15:33.040676 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:44 crc kubenswrapper[4766]: I0130 17:15:44.040079 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:44 crc kubenswrapper[4766]: E0130 17:15:44.040948 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:15:56 crc kubenswrapper[4766]: I0130 17:15:56.042899 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:15:56 crc kubenswrapper[4766]: E0130 17:15:56.043844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:16:11 crc kubenswrapper[4766]: I0130 17:16:11.040450 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:16:11 crc kubenswrapper[4766]: I0130 17:16:11.584929 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.738482 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:29 crc kubenswrapper[4766]: E0130 17:17:29.740072 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.740091 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.740313 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" containerName="collect-profiles" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.741635 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.745787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858585 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.858683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960758 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.960843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.961333 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-catalog-content\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.961458 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d7c1afe-4961-4d01-9513-635a558d6eba-utilities\") pod \"certified-operators-xbqw6\" (UID: 
\"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:29 crc kubenswrapper[4766]: I0130 17:17:29.986388 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf48k\" (UniqueName: \"kubernetes.io/projected/8d7c1afe-4961-4d01-9513-635a558d6eba-kube-api-access-vf48k\") pod \"certified-operators-xbqw6\" (UID: \"8d7c1afe-4961-4d01-9513-635a558d6eba\") " pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:30 crc kubenswrapper[4766]: I0130 17:17:30.068806 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:30 crc kubenswrapper[4766]: I0130 17:17:30.615511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146414 4766 generic.go:334] "Generic (PLEG): container finished" podID="8d7c1afe-4961-4d01-9513-635a558d6eba" containerID="586acc78e1d93b943a55480254d09794912e7f6511e2aa6c95cd772d5a4e71e0" exitCode=0 Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146486 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerDied","Data":"586acc78e1d93b943a55480254d09794912e7f6511e2aa6c95cd772d5a4e71e0"} Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.146790 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"c4a1ca07aec1c81f773e9b6ff12e10f2e9b2b05c89b31b465ae9387f71a0c82a"} Jan 30 17:17:31 crc kubenswrapper[4766]: I0130 17:17:31.148667 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:17:36 crc kubenswrapper[4766]: I0130 17:17:36.189000 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38"} Jan 30 17:17:37 crc kubenswrapper[4766]: I0130 17:17:37.200133 4766 generic.go:334] "Generic (PLEG): container finished" podID="8d7c1afe-4961-4d01-9513-635a558d6eba" containerID="f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38" exitCode=0 Jan 30 17:17:37 crc kubenswrapper[4766]: I0130 17:17:37.200254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerDied","Data":"f00d8420f534ff4322c45b1fceb6e89fa9b4fe29e7559cd153daf96d32f4fc38"} Jan 30 17:17:38 crc kubenswrapper[4766]: I0130 17:17:38.214265 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbqw6" event={"ID":"8d7c1afe-4961-4d01-9513-635a558d6eba","Type":"ContainerStarted","Data":"836e59fbff2828406622783e6759c8e36d18b33bcebcb00b0a79100a58039c34"} Jan 30 17:17:38 crc kubenswrapper[4766]: I0130 17:17:38.239050 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbqw6" podStartSLOduration=2.805500164 podStartE2EDuration="9.239023414s" podCreationTimestamp="2026-01-30 17:17:29 +0000 UTC" firstStartedPulling="2026-01-30 17:17:31.148366956 +0000 UTC m=+3305.786324302" 
lastFinishedPulling="2026-01-30 17:17:37.581890196 +0000 UTC m=+3312.219847552" observedRunningTime="2026-01-30 17:17:38.2339097 +0000 UTC m=+3312.871867056" watchObservedRunningTime="2026-01-30 17:17:38.239023414 +0000 UTC m=+3312.876980780" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.069394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.070394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:40 crc kubenswrapper[4766]: I0130 17:17:40.114632 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.111475 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbqw6" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.177895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbqw6"] Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.220416 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.220964 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqx4x" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" containerID="cri-o://80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" gracePeriod=2 Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.690604 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.882383 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.882843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities" (OuterVolumeSpecName: "utilities") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.883016 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.884077 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") pod \"748d2b4a-b71d-4ecb-9df9-166be9b20302\" (UID: \"748d2b4a-b71d-4ecb-9df9-166be9b20302\") " Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.884409 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.889363 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj" (OuterVolumeSpecName: "kube-api-access-mjxkj") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "kube-api-access-mjxkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.934143 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748d2b4a-b71d-4ecb-9df9-166be9b20302" (UID: "748d2b4a-b71d-4ecb-9df9-166be9b20302"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.985190 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748d2b4a-b71d-4ecb-9df9-166be9b20302-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:50 crc kubenswrapper[4766]: I0130 17:17:50.985224 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjxkj\" (UniqueName: \"kubernetes.io/projected/748d2b4a-b71d-4ecb-9df9-166be9b20302-kube-api-access-mjxkj\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301593 4766 generic.go:334] "Generic (PLEG): container finished" podID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" exitCode=0 Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"} Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqx4x" event={"ID":"748d2b4a-b71d-4ecb-9df9-166be9b20302","Type":"ContainerDied","Data":"4e2e822728d72b043828d2c376fae8de09ee8b30107e67f666204b30101944fd"} Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301687 4766 scope.go:117] "RemoveContainer" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.301830 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqx4x" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.331699 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.335451 4766 scope.go:117] "RemoveContainer" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.339153 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sqx4x"] Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.360015 4766 scope.go:117] "RemoveContainer" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.394766 4766 scope.go:117] "RemoveContainer" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.395469 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": container with ID starting with 80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78 not found: ID does not exist" containerID="80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395509 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78"} err="failed to get container status \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": rpc error: code = NotFound desc = could not find container \"80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78\": container with ID starting with 80f82cf583547c818df4ad2c3dd4a04653845e781c7f074dd998f95869f66a78 not found: ID does not exist" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395577 4766 scope.go:117] "RemoveContainer" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.395917 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": container with ID starting with e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571 not found: ID does not exist" containerID="e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395972 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571"} err="failed to get container status \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": rpc error: code = NotFound desc = could not find container \"e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571\": container with ID starting with e9061df418f3f64de90a81135deae4b851739a3e9086514cdf7058448889f571 not found: ID does not exist" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.395999 4766 scope.go:117] "RemoveContainer" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: E0130 17:17:51.397422 4766 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": container with ID starting with ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b not found: ID does not exist" containerID="ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b" Jan 30 17:17:51 crc kubenswrapper[4766]: I0130 17:17:51.397473 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b"} err="failed to get container status \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": rpc error: code = NotFound desc = could not find container \"ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b\": container with ID starting with ba435ab631f7ff92ef8bba4d34cfafad5419427cc9857694848d5fda4018e46b not found: ID does not exist" Jan 30 17:17:52 crc kubenswrapper[4766]: I0130 17:17:52.048550 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" path="/var/lib/kubelet/pods/748d2b4a-b71d-4ecb-9df9-166be9b20302/volumes" Jan 30 17:18:39 crc kubenswrapper[4766]: I0130 17:18:39.045360 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:18:39 crc kubenswrapper[4766]: I0130 17:18:39.045949 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:09 crc kubenswrapper[4766]: I0130 17:19:09.045397 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:19:09 crc kubenswrapper[4766]: I0130 17:19:09.046051 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.045435 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.046156 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.046235 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.077664 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:19:39 crc kubenswrapper[4766]: I0130 17:19:39.077756 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" gracePeriod=600 Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.086795 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" exitCode=0 Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.086922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504"} Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.087623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} Jan 30 17:19:40 crc kubenswrapper[4766]: I0130 17:19:40.087726 4766 scope.go:117] "RemoveContainer" containerID="33bfd46087a94354017580cc322df23894b074e070192c17b1c605a92e00a8b9" Jan 30 17:21:39 crc kubenswrapper[4766]: I0130 17:21:39.046042 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:21:39 crc kubenswrapper[4766]: I0130 17:21:39.047994 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:09 crc kubenswrapper[4766]: I0130 17:22:09.045644 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:22:09 crc kubenswrapper[4766]: I0130 17:22:09.046247 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:39 crc 
kubenswrapper[4766]: I0130 17:22:39.045987 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.046766 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.046819 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.047594 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.047672 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" gracePeriod=600 Jan 30 17:22:39 crc kubenswrapper[4766]: E0130 17:22:39.171513 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294473 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" exitCode=0 Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294531 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"} Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.294576 4766 scope.go:117] "RemoveContainer" containerID="d1a44a725e65357e16f05f690c1dcafe8159120a80d628d21f45739a01c94504" Jan 30 17:22:39 crc kubenswrapper[4766]: I0130 17:22:39.295344 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:22:39 crc kubenswrapper[4766]: E0130 17:22:39.295832 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 17:22:54 crc kubenswrapper[4766]: I0130 17:22:54.039818 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:22:54 crc kubenswrapper[4766]: E0130 17:22:54.041326 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:09 crc kubenswrapper[4766]: I0130 17:23:09.039720 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:09 crc kubenswrapper[4766]: E0130 17:23:09.040582 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:22 crc kubenswrapper[4766]: I0130 17:23:22.040014 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:22 crc kubenswrapper[4766]: E0130 17:23:22.040895 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:36 crc kubenswrapper[4766]: I0130 17:23:36.043878 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:36 crc kubenswrapper[4766]: E0130 17:23:36.044696 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:47 crc kubenswrapper[4766]: I0130 17:23:47.038782 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:47 crc kubenswrapper[4766]: E0130 17:23:47.039589 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:23:58 crc kubenswrapper[4766]: I0130 17:23:58.038976 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:23:58 crc kubenswrapper[4766]: E0130 17:23:58.039869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:24:13 crc kubenswrapper[4766]: I0130 17:24:13.040352 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:24:13 crc kubenswrapper[4766]: E0130 17:24:13.041246 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:24:28 crc kubenswrapper[4766]: I0130 17:24:28.039946 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9"
Jan 30 17:24:28 crc kubenswrapper[4766]: E0130 17:24:28.041145 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.697598 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"]
Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699349 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-utilities"
Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699390 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-utilities"
Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server"
Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699432 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server"
Jan 30 17:24:40 crc kubenswrapper[4766]: E0130 17:24:40.699442 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-content"
Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699448 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="extract-content"
Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.699571 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server"
podUID="748d2b4a-b71d-4ecb-9df9-166be9b20302" containerName="registry-server" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.700615 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.712929 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.879942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.880075 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.880146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981727 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.981762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.982286 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:40 crc kubenswrapper[4766]: I0130 17:24:40.982562 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " 
pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.007371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"redhat-operators-5hvxv\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.066884 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:41 crc kubenswrapper[4766]: I0130 17:24:41.487735 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.039465 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:42 crc kubenswrapper[4766]: E0130 17:24:42.040022 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307764 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" exitCode=0 Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d"} Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.307853 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerStarted","Data":"a407b6681c75c46865be92f23c418aad527a3d363d1077ce91e1a166879a60a7"} Jan 30 17:24:42 crc kubenswrapper[4766]: I0130 17:24:42.309719 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.895513 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.897950 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:43 crc kubenswrapper[4766]: I0130 17:24:43.913225 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033719 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033804 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.033868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135093 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135294 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135833 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.135891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.163368 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"community-operators-cl9cr\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.222156 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.324625 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" exitCode=0 Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.324723 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50"} Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.525960 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.679736 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.681261 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.691308 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.846984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.847388 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.847413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949293 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949474 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949885 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.949913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:44 crc kubenswrapper[4766]: I0130 17:24:44.974742 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"redhat-marketplace-mjxb9\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.006155 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:45 crc kubenswrapper[4766]: W0130 17:24:45.257748 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a4a051d_4bb1_46b4_9e9c_cc50b06e823f.slice/crio-579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c WatchSource:0}: Error finding container 579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c: Status 404 returned error can't find the container with id 579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.257948 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.332103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.335028 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerStarted","Data":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337611 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36" exitCode=0 Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.337710 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerStarted","Data":"47eb60c93a789d09e469d6d91f744380bb36ac2e3fc7ca1dbbff8f9e7af1d3f7"} Jan 30 17:24:45 crc kubenswrapper[4766]: I0130 17:24:45.362704 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5hvxv" podStartSLOduration=2.7972187269999997 podStartE2EDuration="5.36268612s" podCreationTimestamp="2026-01-30 17:24:40 +0000 UTC" firstStartedPulling="2026-01-30 17:24:42.309389839 +0000 UTC m=+3736.947347185" lastFinishedPulling="2026-01-30 17:24:44.874857232 +0000 UTC m=+3739.512814578" observedRunningTime="2026-01-30 17:24:45.354356672 +0000 UTC m=+3739.992314028" watchObservedRunningTime="2026-01-30 17:24:45.36268612 +0000 UTC m=+3740.000643466" Jan 30 17:24:46 crc kubenswrapper[4766]: I0130 17:24:46.345932 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" exitCode=0 Jan 30 17:24:46 crc kubenswrapper[4766]: I0130 17:24:46.346002 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf"} Jan 30 
Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368092 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35" exitCode=0
Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368549 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35"}
Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.368585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerStarted","Data":"abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b"}
Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.373359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"}
Jan 30 17:24:47 crc kubenswrapper[4766]: I0130 17:24:47.393022 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cl9cr" podStartSLOduration=2.982633186 podStartE2EDuration="4.392996648s" podCreationTimestamp="2026-01-30 17:24:43 +0000 UTC" firstStartedPulling="2026-01-30 17:24:45.339993029 +0000 UTC m=+3739.977950375" lastFinishedPulling="2026-01-30 17:24:46.750356491 +0000 UTC m=+3741.388313837" observedRunningTime="2026-01-30 17:24:47.390848289 +0000 UTC m=+3742.028805645" watchObservedRunningTime="2026-01-30 17:24:47.392996648 +0000 UTC m=+3742.030953994"
Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.391961 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" exitCode=0
Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.392158 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"}
Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.392264 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerStarted","Data":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"}
Jan 30 17:24:48 crc kubenswrapper[4766]: I0130 17:24:48.416963 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjxb9" podStartSLOduration=2.946285696 podStartE2EDuration="4.416939288s" podCreationTimestamp="2026-01-30 17:24:44 +0000 UTC" firstStartedPulling="2026-01-30 17:24:46.348360522 +0000 UTC m=+3740.986317868" lastFinishedPulling="2026-01-30 17:24:47.819014114 +0000 UTC m=+3742.456971460" observedRunningTime="2026-01-30 17:24:48.408461806 +0000 UTC m=+3743.046419152" watchObservedRunningTime="2026-01-30 17:24:48.416939288 +0000 UTC m=+3743.054896634"
Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.067614 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5hvxv"
Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.068220 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5hvxv"
Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.108088 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5hvxv"
Jan 30 17:24:51 crc kubenswrapper[4766]: I0130 17:24:51.450957 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5hvxv"
Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.074034 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"]
Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.423451 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5hvxv" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" containerID="cri-o://653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" gracePeriod=2
Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.800369 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv"
Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.891033 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") "
Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892007 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities" (OuterVolumeSpecName: "utilities") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.891174 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892349 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") pod \"4569e00a-4dea-4144-999c-4ac356b760d8\" (UID: \"4569e00a-4dea-4144-999c-4ac356b760d8\") " Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.892621 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.897971 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5" (OuterVolumeSpecName: "kube-api-access-7n9d5") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "kube-api-access-7n9d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:53 crc kubenswrapper[4766]: I0130 17:24:53.994128 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n9d5\" (UniqueName: \"kubernetes.io/projected/4569e00a-4dea-4144-999c-4ac356b760d8-kube-api-access-7n9d5\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.223038 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.224401 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.264111 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432231 4766 generic.go:334] "Generic (PLEG): container finished" podID="4569e00a-4dea-4144-999c-4ac356b760d8" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" exitCode=0 Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432270 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5hvxv" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5hvxv" event={"ID":"4569e00a-4dea-4144-999c-4ac356b760d8","Type":"ContainerDied","Data":"a407b6681c75c46865be92f23c418aad527a3d363d1077ce91e1a166879a60a7"} Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.432428 4766 scope.go:117] "RemoveContainer" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.452027 4766 scope.go:117] "RemoveContainer" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.471051 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.481432 4766 scope.go:117] "RemoveContainer" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504195 4766 scope.go:117] "RemoveContainer" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.504704 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": container with ID starting with 653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de not found: ID does not exist" containerID="653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504751 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de"} err="failed to get container status \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": rpc error: code = NotFound desc = could not find container \"653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de\": container with ID starting with 653a8e6831133d8b2a6f2ee575acaa40923e3047d8768c174fd6bd89a70f21de not found: ID does not exist" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.504785 4766 scope.go:117] "RemoveContainer" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.505332 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": container with ID starting with 9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50 not found: ID does not exist" containerID="9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505368 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50"} 
err="failed to get container status \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": rpc error: code = NotFound desc = could not find container \"9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50\": container with ID starting with 9d1565a5d3ce55ff82e51943d69e372b3e812ede87bdb6a01d0330bec19acd50 not found: ID does not exist" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505395 4766 scope.go:117] "RemoveContainer" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: E0130 17:24:54.505732 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": container with ID starting with 18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d not found: ID does not exist" containerID="18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d" Jan 30 17:24:54 crc kubenswrapper[4766]: I0130 17:24:54.505786 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d"} err="failed to get container status \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": rpc error: code = NotFound desc = could not find container \"18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d\": container with ID starting with 18bf458792fc5980c1b167dfe022c147e91bebcb4d41a36bb7c4510aa600c92d not found: ID does not exist" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.006498 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.007143 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.039542 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:24:55 crc kubenswrapper[4766]: E0130 17:24:55.040003 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.049240 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.453907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4569e00a-4dea-4144-999c-4ac356b760d8" (UID: "4569e00a-4dea-4144-999c-4ac356b760d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.480896 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.517039 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4569e00a-4dea-4144-999c-4ac356b760d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.668145 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:55 crc kubenswrapper[4766]: I0130 17:24:55.674838 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5hvxv"] Jan 30 17:24:56 crc kubenswrapper[4766]: I0130 17:24:56.048276 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" path="/var/lib/kubelet/pods/4569e00a-4dea-4144-999c-4ac356b760d8/volumes" Jan 30 17:24:56 crc kubenswrapper[4766]: I0130 17:24:56.669307 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:57 crc kubenswrapper[4766]: I0130 17:24:57.451761 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cl9cr" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" containerID="cri-o://abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" gracePeriod=2 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.070692 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470410 4766 generic.go:334] "Generic (PLEG): container finished" podID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerID="abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" exitCode=0 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b"} Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.470654 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjxb9" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" containerID="cri-o://dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" gracePeriod=2 Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.634530 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.760916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.761057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.761143 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") pod \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\" (UID: \"62fdc4f9-d560-48af-8de6-fecfb7e24d8b\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.762057 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities" (OuterVolumeSpecName: "utilities") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.768296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq" (OuterVolumeSpecName: "kube-api-access-5d9zq") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "kube-api-access-5d9zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.813984 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62fdc4f9-d560-48af-8de6-fecfb7e24d8b" (UID: "62fdc4f9-d560-48af-8de6-fecfb7e24d8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.820810 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862783 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862817 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9zq\" (UniqueName: \"kubernetes.io/projected/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-kube-api-access-5d9zq\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.862829 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fdc4f9-d560-48af-8de6-fecfb7e24d8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.963676 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.965399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.965439 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") pod \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\" (UID: \"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f\") " Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.966226 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities" (OuterVolumeSpecName: "utilities") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.968306 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5" (OuterVolumeSpecName: "kube-api-access-szxw5") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "kube-api-access-szxw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:24:58 crc kubenswrapper[4766]: I0130 17:24:58.988045 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" (UID: "2a4a051d-4bb1-46b4-9e9c-cc50b06e823f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066646 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066695 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szxw5\" (UniqueName: \"kubernetes.io/projected/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-kube-api-access-szxw5\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.066708 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479035 4766 generic.go:334] "Generic (PLEG): container finished" podID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" exitCode=0 Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479117 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjxb9" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479136 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479228 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjxb9" event={"ID":"2a4a051d-4bb1-46b4-9e9c-cc50b06e823f","Type":"ContainerDied","Data":"579bc68ebea5e87f4392da65b5ea701114e26d213f6bb3adf0d1d3670c59295c"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.479249 4766 scope.go:117] "RemoveContainer" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.481726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cl9cr" event={"ID":"62fdc4f9-d560-48af-8de6-fecfb7e24d8b","Type":"ContainerDied","Data":"47eb60c93a789d09e469d6d91f744380bb36ac2e3fc7ca1dbbff8f9e7af1d3f7"} Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.481844 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cl9cr" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.495808 4766 scope.go:117] "RemoveContainer" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.511908 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.517750 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjxb9"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.526627 4766 scope.go:117] "RemoveContainer" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.538619 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.548165 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cl9cr"] Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.548911 4766 scope.go:117] "RemoveContainer" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.549753 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": container with ID starting with dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df not found: ID does not exist" containerID="dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.549783 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df"} err="failed to get container status \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": rpc error: code = NotFound desc = could not find container \"dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df\": container with ID starting with dc974cf058ca436183d41a69f1abca4b087bed8847f48187976ccaf3626e59df not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.549806 4766 scope.go:117] "RemoveContainer" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.550116 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": container with ID starting with 4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016 not found: ID does not exist" containerID="4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550164 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016"} err="failed to get container status \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": rpc error: code = NotFound desc = could not find container \"4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016\": container with ID starting with 
4ac13993bab44a7508e345b20e1a3c2404957d32156f8b839370b9931ace0016 not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550212 4766 scope.go:117] "RemoveContainer" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: E0130 17:24:59.550491 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": container with ID starting with 71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf not found: ID does not exist" containerID="71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550561 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf"} err="failed to get container status \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": rpc error: code = NotFound desc = could not find container \"71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf\": container with ID starting with 71ba56461c5026d76621ad26c74f0bce91657447f3349861025a1c45ca4ffedf not found: ID does not exist" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.550577 4766 scope.go:117] "RemoveContainer" containerID="abcde2cbdc9df1676025f822d42fc361cb317312aed1ffad87e6e425537f4c6b" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.567137 4766 scope.go:117] "RemoveContainer" containerID="d10b116a5966ef4d064980eac6f7b00e3fc1d563e3ac448eabf568ca49f9cb35" Jan 30 17:24:59 crc kubenswrapper[4766]: I0130 17:24:59.585722 4766 scope.go:117] "RemoveContainer" containerID="5930c12208a27936cf2e1889a6fbd7e0f6c461fb83dc532115569957fdc3bf36" Jan 30 17:25:00 crc kubenswrapper[4766]: I0130 17:25:00.047474 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" path="/var/lib/kubelet/pods/2a4a051d-4bb1-46b4-9e9c-cc50b06e823f/volumes" Jan 30 17:25:00 crc kubenswrapper[4766]: I0130 17:25:00.048463 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" path="/var/lib/kubelet/pods/62fdc4f9-d560-48af-8de6-fecfb7e24d8b/volumes" Jan 30 17:25:06 crc kubenswrapper[4766]: I0130 17:25:06.042789 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:06 crc kubenswrapper[4766]: E0130 17:25:06.043354 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:17 crc kubenswrapper[4766]: I0130 17:25:17.040420 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:17 crc kubenswrapper[4766]: E0130 17:25:17.041115 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:31 crc kubenswrapper[4766]: I0130 17:25:31.039564 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:31 crc kubenswrapper[4766]: E0130 17:25:31.040287 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:43 crc kubenswrapper[4766]: I0130 17:25:43.040024 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:43 crc kubenswrapper[4766]: E0130 17:25:43.040733 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:25:55 crc kubenswrapper[4766]: I0130 17:25:55.039562 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:25:55 crc kubenswrapper[4766]: E0130 17:25:55.040753 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:06 crc kubenswrapper[4766]: I0130 17:26:06.042903 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:06 crc kubenswrapper[4766]: E0130 17:26:06.043728 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:21 crc kubenswrapper[4766]: I0130 17:26:21.045154 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:21 crc kubenswrapper[4766]: E0130 17:26:21.070584 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:33 crc kubenswrapper[4766]: I0130 17:26:33.039805 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:33 crc kubenswrapper[4766]: E0130 17:26:33.041051 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:45 crc kubenswrapper[4766]: I0130 17:26:45.039589 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:45 crc kubenswrapper[4766]: E0130 17:26:45.041707 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:26:56 crc kubenswrapper[4766]: I0130 17:26:56.043431 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:26:56 crc kubenswrapper[4766]: E0130 17:26:56.044285 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:09 crc kubenswrapper[4766]: I0130 17:27:09.040268 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:09 crc kubenswrapper[4766]: E0130 17:27:09.040959 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:23 crc kubenswrapper[4766]: I0130 17:27:23.039947 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:23 crc kubenswrapper[4766]: E0130 17:27:23.040970 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:35 crc kubenswrapper[4766]: I0130 17:27:35.039463 4766 scope.go:117] "RemoveContainer" 
containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:35 crc kubenswrapper[4766]: E0130 17:27:35.040222 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:27:48 crc kubenswrapper[4766]: I0130 17:27:48.040993 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:27:48 crc kubenswrapper[4766]: I0130 17:27:48.709815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.819787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820728 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820746 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820765 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820787 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820796 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820807 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820816 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820838 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820846 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-content" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820860 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820868 4766 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820880 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820887 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820898 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820906 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: E0130 17:28:03.820916 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.820924 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="extract-utilities" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821098 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fdc4f9-d560-48af-8de6-fecfb7e24d8b" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821116 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a051d-4bb1-46b4-9e9c-cc50b06e823f" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.821139 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4569e00a-4dea-4144-999c-4ac356b760d8" containerName="registry-server" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.826012 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.830159 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993417 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:03 crc kubenswrapper[4766]: I0130 17:28:03.993527 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095457 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.095898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.096117 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.120298 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"certified-operators-pcvwt\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.151953 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:04 crc kubenswrapper[4766]: I0130 17:28:04.616161 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.832841 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7" exitCode=0 Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.832962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7"} Jan 30 17:28:05 crc kubenswrapper[4766]: I0130 17:28:05.833431 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"b029f3569a66e8a8f3f99f4d7fc08ed279dc99ad1ace20029a511e0ade65e8b6"} Jan 30 17:28:06 crc kubenswrapper[4766]: I0130 17:28:06.841224 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b"} Jan 30 17:28:07 crc kubenswrapper[4766]: I0130 17:28:07.848998 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b" exitCode=0 Jan 30 17:28:07 crc kubenswrapper[4766]: I0130 17:28:07.849054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b"} Jan 30 17:28:08 crc kubenswrapper[4766]: I0130 17:28:08.861464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerStarted","Data":"9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab"} Jan 30 17:28:08 crc kubenswrapper[4766]: I0130 17:28:08.887419 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pcvwt" podStartSLOduration=3.228158926 podStartE2EDuration="5.887398876s" podCreationTimestamp="2026-01-30 17:28:03 +0000 UTC" firstStartedPulling="2026-01-30 17:28:05.836219573 +0000 UTC m=+3940.474176919" lastFinishedPulling="2026-01-30 17:28:08.495459513 +0000 UTC m=+3943.133416869" observedRunningTime="2026-01-30 17:28:08.880870398 +0000 UTC m=+3943.518827764" watchObservedRunningTime="2026-01-30 17:28:08.887398876 +0000 UTC m=+3943.525356222" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.152556 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.153524 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:14 crc kubenswrapper[4766]: I0130 17:28:14.198165 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:15 crc kubenswrapper[4766]: I0130 17:28:15.231284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:17 crc kubenswrapper[4766]: I0130 17:28:17.604976 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:17 crc kubenswrapper[4766]: I0130 17:28:17.934983 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pcvwt" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" containerID="cri-o://9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" gracePeriod=2 Jan 30 17:28:18 crc kubenswrapper[4766]: I0130 17:28:18.951909 4766 generic.go:334] "Generic (PLEG): container finished" podID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerID="9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" exitCode=0 Jan 30 17:28:18 crc kubenswrapper[4766]: I0130 17:28:18.951983 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab"} Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.027702 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112330 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112380 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.112516 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") pod \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\" (UID: \"6f04beb2-7aa4-4e60-acb5-943ec1b07978\") " Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.113966 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities" (OuterVolumeSpecName: "utilities") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.117859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv" (OuterVolumeSpecName: "kube-api-access-gsnqv") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "kube-api-access-gsnqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.161444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f04beb2-7aa4-4e60-acb5-943ec1b07978" (UID: "6f04beb2-7aa4-4e60-acb5-943ec1b07978"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213835 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213869 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f04beb2-7aa4-4e60-acb5-943ec1b07978-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.213885 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsnqv\" (UniqueName: \"kubernetes.io/projected/6f04beb2-7aa4-4e60-acb5-943ec1b07978-kube-api-access-gsnqv\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.962964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcvwt" event={"ID":"6f04beb2-7aa4-4e60-acb5-943ec1b07978","Type":"ContainerDied","Data":"b029f3569a66e8a8f3f99f4d7fc08ed279dc99ad1ace20029a511e0ade65e8b6"} Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.963038 4766 scope.go:117] "RemoveContainer" containerID="9c10a28311d35b39d17a20b29b1674abd6dd1ba0402501fab704f89e9c2768ab" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.963059 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcvwt" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.984175 4766 scope.go:117] "RemoveContainer" containerID="d2c700c9cf815142844159ccaab5b2e609d3972a6caefb05a8b58a4a680f0b9b" Jan 30 17:28:19 crc kubenswrapper[4766]: I0130 17:28:19.994790 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.002525 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pcvwt"] Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.026033 4766 scope.go:117] "RemoveContainer" containerID="ecae258863521551783468826ada29bec790cda4bf21502aec01cbf669c169e7" Jan 30 17:28:20 crc kubenswrapper[4766]: I0130 17:28:20.049807 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" path="/var/lib/kubelet/pods/6f04beb2-7aa4-4e60-acb5-943ec1b07978/volumes" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.169127 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172756 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-content" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172788 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-content" Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172805 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: E0130 17:30:00.172828 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-utilities" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.172836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="extract-utilities" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.173126 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f04beb2-7aa4-4e60-acb5-943ec1b07978" containerName="registry-server" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.174214 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.176659 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.176671 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.177511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268552 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.268673 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.369781 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.370160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.370219 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.371602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod 
\"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.378837 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.390819 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"collect-profiles-29496570-t4zn4\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.498121 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:00 crc kubenswrapper[4766]: I0130 17:30:00.900672 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657561 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerID="2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee" exitCode=0 Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657629 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerDied","Data":"2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee"} Jan 30 17:30:01 crc kubenswrapper[4766]: I0130 17:30:01.657865 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerStarted","Data":"5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5"} Jan 30 17:30:02 crc kubenswrapper[4766]: I0130 17:30:02.903524 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.004961 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.005029 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.005178 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") pod \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\" (UID: \"1d5ff932-157e-49bf-9f1e-b4dc767de05e\") " Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.006106 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.011332 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x" (OuterVolumeSpecName: "kube-api-access-v726x") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "kube-api-access-v726x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.011457 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d5ff932-157e-49bf-9f1e-b4dc767de05e" (UID: "1d5ff932-157e-49bf-9f1e-b4dc767de05e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106533 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5ff932-157e-49bf-9f1e-b4dc767de05e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106869 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v726x\" (UniqueName: \"kubernetes.io/projected/1d5ff932-157e-49bf-9f1e-b4dc767de05e-kube-api-access-v726x\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.106886 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d5ff932-157e-49bf-9f1e-b4dc767de05e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672740 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" event={"ID":"1d5ff932-157e-49bf-9f1e-b4dc767de05e","Type":"ContainerDied","Data":"5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5"} Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672786 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.672793 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cbf3b759bd6bfceded4b9afe5b7971707417f8ffc9ef7455d7bcf67ecfafcd5" Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.979753 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 17:30:03 crc kubenswrapper[4766]: I0130 17:30:03.985972 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-bphwz"] Jan 30 17:30:04 crc kubenswrapper[4766]: I0130 17:30:04.051596 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae50e63c-8d14-4773-85f7-1deaaee40da6" path="/var/lib/kubelet/pods/ae50e63c-8d14-4773-85f7-1deaaee40da6/volumes" Jan 30 17:30:09 crc kubenswrapper[4766]: I0130 17:30:09.045697 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:30:09 crc kubenswrapper[4766]: I0130 17:30:09.046302 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:30:30 crc kubenswrapper[4766]: I0130 17:30:30.824510 4766 scope.go:117] "RemoveContainer" containerID="8dd7d74e3c7ee802070a55313e5ed776854ad2a4f3bbdd635c4f840d40fcfbc2" Jan 30 17:30:39 crc kubenswrapper[4766]: I0130 17:30:39.045758 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 17:30:39 crc kubenswrapper[4766]: I0130 17:30:39.046362 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.045860 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.046542 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.046598 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.047231 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:31:09 crc kubenswrapper[4766]: I0130 17:31:09.047287 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" gracePeriod=600 Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.158521 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" exitCode=0 Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.158597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c"} Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.159107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} Jan 30 17:31:10 crc kubenswrapper[4766]: I0130 17:31:10.159131 4766 scope.go:117] "RemoveContainer" containerID="8257227168db12c5ef7d2b395f7b8af2a8fe11d391df629780e7c43cc40160f9" Jan 30 17:33:09 crc kubenswrapper[4766]: I0130 17:33:09.045658 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:33:09 crc kubenswrapper[4766]: I0130 17:33:09.047389 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:33:39 crc kubenswrapper[4766]: I0130 17:33:39.045043 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:33:39 crc kubenswrapper[4766]: I0130 17:33:39.047024 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045439 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045931 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.045969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.046378 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.046423 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" gracePeriod=600 Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.453722 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" exitCode=0 Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.453796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"} Jan 30 17:34:09 crc kubenswrapper[4766]: I0130 17:34:09.454265 4766 scope.go:117] "RemoveContainer" containerID="88a09e8baa31aaef5207c9fcdfb3917d77584174469d09080f844dc7ec4a244c" Jan 30 17:34:09 crc kubenswrapper[4766]: E0130 17:34:09.835010 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:10 crc kubenswrapper[4766]: I0130 17:34:10.462905 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:10 crc kubenswrapper[4766]: E0130 17:34:10.463236 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:24 crc kubenswrapper[4766]: I0130 17:34:24.039858 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:24 crc kubenswrapper[4766]: E0130 17:34:24.040629 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:36 crc kubenswrapper[4766]: I0130 17:34:36.043270 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:36 crc kubenswrapper[4766]: E0130 17:34:36.044315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.121373 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: E0130 17:34:46.122834 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.122853 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.123047 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" containerName="collect-profiles" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.124320 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.144605 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293915 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.293946 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.395996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.396665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.396711 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " 
pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.429282 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"redhat-operators-sv58z\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.457813 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.680541 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:46 crc kubenswrapper[4766]: I0130 17:34:46.709997 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"7779d57a2f7d39826992cc6bccf7eef3bb9b01a232008a9820c30f1fbd42f046"} Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.718923 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" exitCode=0 Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.718992 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543"} Jan 30 17:34:47 crc kubenswrapper[4766]: I0130 17:34:47.721899 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:34:48 crc kubenswrapper[4766]: I0130 17:34:48.735272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} Jan 30 17:34:49 crc kubenswrapper[4766]: I0130 17:34:49.747900 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" exitCode=0 Jan 30 17:34:49 crc kubenswrapper[4766]: I0130 17:34:49.748013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} Jan 30 17:34:50 crc kubenswrapper[4766]: I0130 17:34:50.759009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerStarted","Data":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} Jan 30 17:34:50 crc kubenswrapper[4766]: I0130 17:34:50.785070 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv58z" podStartSLOduration=2.245876919 podStartE2EDuration="4.785048822s" podCreationTimestamp="2026-01-30 17:34:46 +0000 UTC" firstStartedPulling="2026-01-30 17:34:47.721579755 +0000 UTC m=+4342.359537101" lastFinishedPulling="2026-01-30 17:34:50.260751658 +0000 UTC m=+4344.898709004" 
observedRunningTime="2026-01-30 17:34:50.779788447 +0000 UTC m=+4345.417745793" watchObservedRunningTime="2026-01-30 17:34:50.785048822 +0000 UTC m=+4345.423006158" Jan 30 17:34:51 crc kubenswrapper[4766]: I0130 17:34:51.039584 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:34:51 crc kubenswrapper[4766]: E0130 17:34:51.039771 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.458799 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.459299 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.504819 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.838542 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:34:56 crc kubenswrapper[4766]: I0130 17:34:56.892632 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:34:58 crc kubenswrapper[4766]: I0130 17:34:58.814212 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv58z" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server" containerID="cri-o://2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" gracePeriod=2 Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.264608 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293550 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293619 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.293693 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") pod \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\" (UID: \"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489\") " Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.294782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities" (OuterVolumeSpecName: "utilities") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.300384 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf" (OuterVolumeSpecName: "kube-api-access-kgpwf") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "kube-api-access-kgpwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.395993 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpwf\" (UniqueName: \"kubernetes.io/projected/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-kube-api-access-kgpwf\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.396031 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.412464 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" (UID: "7c6480b5-07cc-4bd3-a1f5-d0ecdf357489"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.496733 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834316 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" exitCode=0 Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834372 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv58z" event={"ID":"7c6480b5-07cc-4bd3-a1f5-d0ecdf357489","Type":"ContainerDied","Data":"7779d57a2f7d39826992cc6bccf7eef3bb9b01a232008a9820c30f1fbd42f046"} Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834411 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv58z" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.834422 4766 scope.go:117] "RemoveContainer" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.853227 4766 scope.go:117] "RemoveContainer" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.863897 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.870641 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv58z"] Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.898582 4766 scope.go:117] "RemoveContainer" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.917268 4766 scope.go:117] "RemoveContainer" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.917818 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": container with ID starting with 2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3 not found: ID does not exist" containerID="2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.917864 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3"} err="failed to get container status \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": rpc error: code = NotFound desc = could not find container \"2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3\": container with ID starting with 2fbab93a5123e6e1a496c78d9b54d81039d9b0ba7497e0295164d298e0e012e3 not found: ID does not exist" Jan 30 17:35:00 crc 
kubenswrapper[4766]: I0130 17:35:00.917900 4766 scope.go:117] "RemoveContainer" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.918404 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": container with ID starting with b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5 not found: ID does not exist" containerID="b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918431 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5"} err="failed to get container status \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": rpc error: code = NotFound desc = could not find container \"b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5\": container with ID starting with b7ee6a4394b9a3e98dbaf75f318c4ad67ba5e1a0923718f427fbb75531398ef5 not found: ID does not exist" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918450 4766 scope.go:117] "RemoveContainer" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: E0130 17:35:00.918751 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": container with ID starting with 3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543 not found: ID does not exist" containerID="3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543" Jan 30 17:35:00 crc kubenswrapper[4766]: I0130 17:35:00.918791 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543"} err="failed to get container status \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": rpc error: code = NotFound desc = could not find container \"3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543\": container with ID starting with 3019bdb6b267537a12f194a546eb47abe65ea4880b5cd19edecdfb6ede31f543 not found: ID does not exist" Jan 30 17:35:02 crc kubenswrapper[4766]: I0130 17:35:02.047730 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" path="/var/lib/kubelet/pods/7c6480b5-07cc-4bd3-a1f5-d0ecdf357489/volumes" Jan 30 17:35:05 crc kubenswrapper[4766]: I0130 17:35:05.039720 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:05 crc kubenswrapper[4766]: E0130 17:35:05.039964 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.070885 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 
17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071316 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-content" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071335 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-content" Jan 30 17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071363 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-utilities" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071371 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="extract-utilities" Jan 30 17:35:06 crc kubenswrapper[4766]: E0130 17:35:06.071394 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071401 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.071572 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6480b5-07cc-4bd3-a1f5-d0ecdf357489" containerName="registry-server" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.072772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073472 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073617 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.073708 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.086623 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174444 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174461 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.174875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.175028 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.198886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"community-operators-5wb7s\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.395871 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.692403 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:06 crc kubenswrapper[4766]: I0130 17:35:06.874529 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"b12c1b39c79da2097cf82447c715692db38883222baa6093ec2dc5ab0047733d"} Jan 30 17:35:07 crc kubenswrapper[4766]: I0130 17:35:07.882233 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" exitCode=0 Jan 30 17:35:07 crc kubenswrapper[4766]: I0130 17:35:07.882277 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70"} Jan 30 17:35:08 crc kubenswrapper[4766]: I0130 17:35:08.891306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} Jan 30 17:35:09 crc kubenswrapper[4766]: I0130 17:35:09.898662 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" exitCode=0 Jan 30 17:35:09 crc kubenswrapper[4766]: I0130 17:35:09.898907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} Jan 30 17:35:10 crc kubenswrapper[4766]: I0130 17:35:10.908251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerStarted","Data":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} Jan 30 17:35:10 crc kubenswrapper[4766]: I0130 17:35:10.931162 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wb7s" podStartSLOduration=2.451162638 podStartE2EDuration="4.931138148s" podCreationTimestamp="2026-01-30 17:35:06 +0000 UTC" firstStartedPulling="2026-01-30 17:35:07.884072653 +0000 UTC m=+4362.522029999" lastFinishedPulling="2026-01-30 17:35:10.364048163 +0000 UTC m=+4365.002005509" observedRunningTime="2026-01-30 17:35:10.925715679 +0000 UTC m=+4365.563673025" watchObservedRunningTime="2026-01-30 17:35:10.931138148 +0000 UTC m=+4365.569095494" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.397203 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.397562 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.444460 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:16 crc kubenswrapper[4766]: I0130 17:35:16.997425 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:17 crc kubenswrapper[4766]: I0130 17:35:17.039598 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:17 crc kubenswrapper[4766]: E0130 17:35:17.040132 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:17 crc kubenswrapper[4766]: I0130 17:35:17.317236 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:18 crc kubenswrapper[4766]: I0130 17:35:18.971616 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wb7s" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" containerID="cri-o://f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" gracePeriod=2 Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.844483 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.981134 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982310 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities" (OuterVolumeSpecName: "utilities") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982407 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982449 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") pod \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\" (UID: \"94bf4dd2-3bf6-4429-a387-5cc19fadf159\") " Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.982700 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989199 4766 generic.go:334] "Generic (PLEG): container finished" podID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" exitCode=0 Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989360 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wb7s" event={"ID":"94bf4dd2-3bf6-4429-a387-5cc19fadf159","Type":"ContainerDied","Data":"b12c1b39c79da2097cf82447c715692db38883222baa6093ec2dc5ab0047733d"} Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989396 4766 scope.go:117] "RemoveContainer" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.989413 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wb7s" Jan 30 17:35:19 crc kubenswrapper[4766]: I0130 17:35:19.991260 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw" (OuterVolumeSpecName: "kube-api-access-qtbmw") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "kube-api-access-qtbmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.032469 4766 scope.go:117] "RemoveContainer" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.053598 4766 scope.go:117] "RemoveContainer" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.084622 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbmw\" (UniqueName: \"kubernetes.io/projected/94bf4dd2-3bf6-4429-a387-5cc19fadf159-kube-api-access-qtbmw\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.087666 4766 scope.go:117] "RemoveContainer" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.088196 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": container with ID starting with f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed not found: ID does not exist" containerID="f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088265 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed"} err="failed to get container status \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": rpc error: code = NotFound desc = could not find container \"f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed\": container with ID starting with f0e5722bd70c78c44b76a49c782e9f6cb3cd0bf57cb393e2198ca2b5697ce2ed not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088305 4766 scope.go:117] "RemoveContainer" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.088673 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": container with ID starting with 69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956 not found: ID does not exist" containerID="69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088720 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956"} err="failed to get container status \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": rpc error: code = NotFound desc = could not find container \"69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956\": container with ID starting with 69c1ebac4e1d751b165ee5d8b42388e508e15e90175d6241e2f344ff246ae956 not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.088756 4766 scope.go:117] "RemoveContainer" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: E0130 17:35:20.089173 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": container with ID starting with 8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70 not found: ID does not exist" containerID="8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.089282 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70"} err="failed to get container status \"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": rpc error: code = NotFound desc = could not find container \"8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70\": container with ID starting with 8f53263ef0f45f5465e08466c5c75079b529bfc1a1881f2675dd3b9de746cb70 not found: ID does not exist" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.111805 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94bf4dd2-3bf6-4429-a387-5cc19fadf159" (UID: "94bf4dd2-3bf6-4429-a387-5cc19fadf159"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.186132 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bf4dd2-3bf6-4429-a387-5cc19fadf159-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.324909 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:20 crc kubenswrapper[4766]: I0130 17:35:20.330308 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wb7s"] Jan 30 17:35:22 crc kubenswrapper[4766]: I0130 17:35:22.068241 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" path="/var/lib/kubelet/pods/94bf4dd2-3bf6-4429-a387-5cc19fadf159/volumes" Jan 30 17:35:29 crc kubenswrapper[4766]: I0130 17:35:29.039286 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:29 crc kubenswrapper[4766]: E0130 17:35:29.040037 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.461748 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462601 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-utilities" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462616 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-utilities" Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462624 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462630 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: E0130 17:35:35.462651 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-content" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462658 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="extract-content" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.462809 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="94bf4dd2-3bf6-4429-a387-5cc19fadf159" containerName="registry-server" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.463853 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.469533 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627498 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.627632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729840 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729934 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.729989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod 
\"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.730456 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.730851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.752379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"redhat-marketplace-hjhvv\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:35 crc kubenswrapper[4766]: I0130 17:35:35.791359 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:36 crc kubenswrapper[4766]: I0130 17:35:36.057847 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:36 crc kubenswrapper[4766]: I0130 17:35:36.095073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"01b91be4a1ae2b19d40d81b37a1373588971cae934410131587b526a172a37bb"} Jan 30 17:35:37 crc kubenswrapper[4766]: I0130 17:35:37.102971 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" exitCode=0 Jan 30 17:35:37 crc kubenswrapper[4766]: I0130 17:35:37.103152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7"} Jan 30 17:35:38 crc kubenswrapper[4766]: I0130 17:35:38.110211 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} Jan 30 17:35:39 crc kubenswrapper[4766]: I0130 17:35:39.119025 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" exitCode=0 Jan 30 17:35:39 crc kubenswrapper[4766]: I0130 17:35:39.119069 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} Jan 30 17:35:40 crc kubenswrapper[4766]: I0130 17:35:40.128897 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerStarted","Data":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} Jan 30 17:35:40 crc kubenswrapper[4766]: I0130 17:35:40.153535 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hjhvv" podStartSLOduration=2.742026708 podStartE2EDuration="5.153515903s" podCreationTimestamp="2026-01-30 17:35:35 +0000 UTC" firstStartedPulling="2026-01-30 17:35:37.105604993 +0000 UTC m=+4391.743562339" lastFinishedPulling="2026-01-30 17:35:39.517094188 +0000 UTC m=+4394.155051534" observedRunningTime="2026-01-30 17:35:40.149209154 +0000 UTC m=+4394.787166520" watchObservedRunningTime="2026-01-30 17:35:40.153515903 +0000 UTC m=+4394.791473249" Jan 30 17:35:43 crc kubenswrapper[4766]: I0130 17:35:43.039789 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:43 crc kubenswrapper[4766]: E0130 17:35:43.040546 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.791875 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.792217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:45 crc kubenswrapper[4766]: I0130 17:35:45.840209 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:46 crc kubenswrapper[4766]: I0130 17:35:46.200413 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:46 crc kubenswrapper[4766]: I0130 17:35:46.243424 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.179677 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hjhvv" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" containerID="cri-o://ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" gracePeriod=2 Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.570067 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.730797 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.730878 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.731034 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") pod \"575b9005-6dc0-455d-8097-a165628fd850\" (UID: \"575b9005-6dc0-455d-8097-a165628fd850\") " Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.732137 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities" (OuterVolumeSpecName: "utilities") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.736749 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt" (OuterVolumeSpecName: "kube-api-access-8zktt") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "kube-api-access-8zktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.756548 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "575b9005-6dc0-455d-8097-a165628fd850" (UID: "575b9005-6dc0-455d-8097-a165628fd850"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833050 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833096 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zktt\" (UniqueName: \"kubernetes.io/projected/575b9005-6dc0-455d-8097-a165628fd850-kube-api-access-8zktt\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:48 crc kubenswrapper[4766]: I0130 17:35:48.833107 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575b9005-6dc0-455d-8097-a165628fd850-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187700 4766 generic.go:334] "Generic (PLEG): container finished" podID="575b9005-6dc0-455d-8097-a165628fd850" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" exitCode=0 Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187774 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjhvv" event={"ID":"575b9005-6dc0-455d-8097-a165628fd850","Type":"ContainerDied","Data":"01b91be4a1ae2b19d40d81b37a1373588971cae934410131587b526a172a37bb"} Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187774 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjhvv" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.187795 4766 scope.go:117] "RemoveContainer" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.202791 4766 scope.go:117] "RemoveContainer" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.219095 4766 scope.go:117] "RemoveContainer" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.224376 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.232711 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjhvv"] Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.246961 4766 scope.go:117] "RemoveContainer" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.247605 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": container with ID starting with ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054 not found: ID does not exist" containerID="ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.247725 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054"} err="failed to get container status \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": rpc error: code = NotFound desc = could not find container \"ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054\": container with ID starting with ea25da4a1e9ae2c42a265e7d70506aaae1b5e2d2009cff423c884a1e78c3a054 not found: ID does not exist" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.247753 4766 scope.go:117] "RemoveContainer" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.248148 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": container with ID starting with d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b not found: ID does not exist" containerID="d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248200 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b"} err="failed to get container status \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": rpc error: code = NotFound desc = could not find container \"d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b\": container with ID starting with d21171451e4bb484c0205be267627694bb27b0f78e9366d9d7f3eae330f25e5b not found: ID does not exist" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248226 4766 scope.go:117] "RemoveContainer" 
containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: E0130 17:35:49.248561 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": container with ID starting with 74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7 not found: ID does not exist" containerID="74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7" Jan 30 17:35:49 crc kubenswrapper[4766]: I0130 17:35:49.248588 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7"} err="failed to get container status \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": rpc error: code = NotFound desc = could not find container \"74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7\": container with ID starting with 74a144af1554b6fb622d7415dcfa4e99432537d5df2b9a4266fc0cfe6bf31bd7 not found: ID does not exist" Jan 30 17:35:50 crc kubenswrapper[4766]: I0130 17:35:50.048665 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575b9005-6dc0-455d-8097-a165628fd850" path="/var/lib/kubelet/pods/575b9005-6dc0-455d-8097-a165628fd850/volumes" Jan 30 17:35:55 crc kubenswrapper[4766]: I0130 17:35:55.039233 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:35:55 crc kubenswrapper[4766]: E0130 17:35:55.040935 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:09 crc kubenswrapper[4766]: I0130 17:36:09.039800 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:09 crc kubenswrapper[4766]: E0130 17:36:09.040544 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:24 crc kubenswrapper[4766]: I0130 17:36:24.039499 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:24 crc kubenswrapper[4766]: E0130 17:36:24.040289 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:35 crc kubenswrapper[4766]: I0130 17:36:35.040032 4766 scope.go:117] "RemoveContainer" 
containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:35 crc kubenswrapper[4766]: E0130 17:36:35.040758 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:36:47 crc kubenswrapper[4766]: I0130 17:36:47.040081 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:36:47 crc kubenswrapper[4766]: E0130 17:36:47.041058 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:01 crc kubenswrapper[4766]: I0130 17:37:01.040294 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:01 crc kubenswrapper[4766]: E0130 17:37:01.041104 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:15 crc kubenswrapper[4766]: I0130 17:37:15.039321 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:15 crc kubenswrapper[4766]: E0130 17:37:15.040048 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:17 crc kubenswrapper[4766]: I0130 17:37:17.884239 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 17:37:17 crc kubenswrapper[4766]: I0130 17:37:17.890482 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-mxw77"] Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.016779 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017060 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-content" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017094 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-content" Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017107 4766 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-utilities" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017115 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="extract-utilities" Jan 30 17:37:18 crc kubenswrapper[4766]: E0130 17:37:18.017132 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017137 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017292 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="575b9005-6dc0-455d-8097-a165628fd850" containerName="registry-server" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.017802 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.020045 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021235 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021267 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.021547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.029108 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.059401 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad5692e-34c5-4e32-ba96-cd5e6e617c62" path="/var/lib/kubelet/pods/3ad5692e-34c5-4e32-ba96-cd5e6e617c62/volumes" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.122473 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.123168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.123383 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225115 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225541 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225656 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225773 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.225886 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.258422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"crc-storage-crc-9vbc9\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") " pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.347593 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9" Jan 30 17:37:18 crc kubenswrapper[4766]: I0130 17:37:18.816940 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"] Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817382 4766 generic.go:334] "Generic (PLEG): container finished" podID="b1d0287e-07c6-4924-85de-701d0ff03488" containerID="3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262" exitCode=0 Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817477 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerDied","Data":"3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262"} Jan 30 17:37:19 crc kubenswrapper[4766]: I0130 17:37:19.817731 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerStarted","Data":"c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a"} Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.145364 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.271622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") "
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") "
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272091 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") pod \"b1d0287e-07c6-4924-85de-701d0ff03488\" (UID: \"b1d0287e-07c6-4924-85de-701d0ff03488\") "
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272305 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.272589 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b1d0287e-07c6-4924-85de-701d0ff03488-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.279931 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8" (OuterVolumeSpecName: "kube-api-access-4qzm8") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "kube-api-access-4qzm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.298511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b1d0287e-07c6-4924-85de-701d0ff03488" (UID: "b1d0287e-07c6-4924-85de-701d0ff03488"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.374333 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b1d0287e-07c6-4924-85de-701d0ff03488-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.374600 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qzm8\" (UniqueName: \"kubernetes.io/projected/b1d0287e-07c6-4924-85de-701d0ff03488-kube-api-access-4qzm8\") on node \"crc\" DevicePath \"\""
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831595 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-9vbc9" event={"ID":"b1d0287e-07c6-4924-85de-701d0ff03488","Type":"ContainerDied","Data":"c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a"}
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831920 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ad9955f9c9492351f5f634289f5868d65bcfba8c44923b9c7ee46fe2179e5a"
Jan 30 17:37:21 crc kubenswrapper[4766]: I0130 17:37:21.831648 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-9vbc9"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.281650 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"]
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.286856 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-9vbc9"]
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.416955 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"]
Jan 30 17:37:23 crc kubenswrapper[4766]: E0130 17:37:23.417488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.417526 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.417745 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.418733 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421818 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421845 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.421880 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.423491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r8skn"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.431732 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"]
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603307 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603345 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705496 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705532 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.705845 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q"
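[Editor's note: tracing one pod's volume lifecycle through entries like these is easier with a small parser. A sketch that splits each journal line into its klog fields and filters on a pod name; the regexp mirrors the line shape above (journald prefix, klog header such as I0130 17:37:23.603257 4766 reconciler_common.go:245], then the message), and the sample lines are abbreviated stand-ins, not full log entries.]

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the shape of the kubelet lines above: journald timestamp/host/unit,
// then the klog header (level+date, wall time, pid, source location), then
// the free-form message.
var kubeletLine = regexp.MustCompile(
	`^(\w+ \d+ [\d:]+) crc kubenswrapper\[\d+\]: ([IEW])\d+ ([\d:.]+)\s+\d+ ([\w.]+:\d+)\] (.*)$`,
)

func main() {
	sample := []string{
		`Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.603257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" ..." pod="crc-storage/crc-storage-crc-nmc5q"`,
		`Jan 30 17:37:23 crc kubenswrapper[4766]: E0130 17:37:23.417488 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" containerName="storage"`,
	}
	podRe := regexp.MustCompile(`pod="crc-storage/crc-storage-crc-nmc5q"`)
	for _, line := range sample {
		m := kubeletLine.FindStringSubmatch(line)
		if m == nil || !podRe.MatchString(m[5]) {
			continue // not a kubelet line, or not our pod
		}
		// wall time, severity, source location, message
		fmt.Printf("%s  %s  %s  %s\n", m[3], m[2], m[4], m[5])
	}
}
```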
pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.706288 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.734285 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"crc-storage-crc-nmc5q\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:23 crc kubenswrapper[4766]: I0130 17:37:23.746690 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.049392 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d0287e-07c6-4924-85de-701d0ff03488" path="/var/lib/kubelet/pods/b1d0287e-07c6-4924-85de-701d0ff03488/volumes" Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.247266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nmc5q"] Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861229 4766 generic.go:334] "Generic (PLEG): container finished" podID="554cf476-6d37-432b-826d-9a1094b73f78" containerID="bc2fc12d9fb98dc06beb3fcccccec9dd09eda88527c87c3e7ef793da23ffc25f" exitCode=0 Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerDied","Data":"bc2fc12d9fb98dc06beb3fcccccec9dd09eda88527c87c3e7ef793da23ffc25f"} Jan 30 17:37:24 crc kubenswrapper[4766]: I0130 17:37:24.861638 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerStarted","Data":"e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6"} Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.138791 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.257741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") pod \"554cf476-6d37-432b-826d-9a1094b73f78\" (UID: \"554cf476-6d37-432b-826d-9a1094b73f78\") " Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258631 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.258891 4766 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/554cf476-6d37-432b-826d-9a1094b73f78-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.263036 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf" (OuterVolumeSpecName: "kube-api-access-n2xxf") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "kube-api-access-n2xxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.275320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "554cf476-6d37-432b-826d-9a1094b73f78" (UID: "554cf476-6d37-432b-826d-9a1094b73f78"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.360908 4766 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/554cf476-6d37-432b-826d-9a1094b73f78-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.361219 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xxf\" (UniqueName: \"kubernetes.io/projected/554cf476-6d37-432b-826d-9a1094b73f78-kube-api-access-n2xxf\") on node \"crc\" DevicePath \"\"" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nmc5q" event={"ID":"554cf476-6d37-432b-826d-9a1094b73f78","Type":"ContainerDied","Data":"e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6"} Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878803 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nmc5q" Jan 30 17:37:26 crc kubenswrapper[4766]: I0130 17:37:26.878977 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e532aba2a8b327b4d07974fc0b2b133d4749f14e3fabd7cf5d5ad5417408d2b6" Jan 30 17:37:27 crc kubenswrapper[4766]: I0130 17:37:27.040348 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:27 crc kubenswrapper[4766]: E0130 17:37:27.040775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:31 crc kubenswrapper[4766]: I0130 17:37:31.225311 4766 scope.go:117] "RemoveContainer" containerID="403a056677f3371b0fbc8b04190fc4d600537695442bf6a2adce1bee6fee4304" Jan 30 17:37:38 crc kubenswrapper[4766]: I0130 17:37:38.040363 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:38 crc kubenswrapper[4766]: E0130 17:37:38.040908 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:37:51 crc kubenswrapper[4766]: I0130 17:37:51.040036 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:37:51 crc kubenswrapper[4766]: E0130 17:37:51.040920 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:38:06 crc kubenswrapper[4766]: I0130 
Jan 30 17:38:06 crc kubenswrapper[4766]: E0130 17:38:06.044373 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:38:17 crc kubenswrapper[4766]: I0130 17:38:17.040454 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:38:17 crc kubenswrapper[4766]: E0130 17:38:17.041481 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:38:32 crc kubenswrapper[4766]: I0130 17:38:32.040114 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:38:32 crc kubenswrapper[4766]: E0130 17:38:32.040845 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:38:43 crc kubenswrapper[4766]: I0130 17:38:43.038974 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:38:43 crc kubenswrapper[4766]: E0130 17:38:43.039709 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.844073 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:45 crc kubenswrapper[4766]: E0130 17:38:45.846558 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.846574 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.846713 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="554cf476-6d37-432b-826d-9a1094b73f78" containerName="storage"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.848590 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.854496 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977118 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977204 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:45 crc kubenswrapper[4766]: I0130 17:38:45.977236 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079004 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.079100 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.080037 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.080604 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.107104 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"certified-operators-c6jjm\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") " pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.175578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:46 crc kubenswrapper[4766]: I0130 17:38:46.655699 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.463770 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d" exitCode=0
Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.463878 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"}
Jan 30 17:38:47 crc kubenswrapper[4766]: I0130 17:38:47.464080 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerStarted","Data":"67b5cb77ed90276d3ad55c1d03d0e6cb3bcec17521689084489af36ee219355e"}
Jan 30 17:38:49 crc kubenswrapper[4766]: I0130 17:38:49.483623 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45" exitCode=0
Jan 30 17:38:49 crc kubenswrapper[4766]: I0130 17:38:49.483718 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"}
Jan 30 17:38:50 crc kubenswrapper[4766]: I0130 17:38:50.494067 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerStarted","Data":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"}
Jan 30 17:38:50 crc kubenswrapper[4766]: I0130 17:38:50.516375 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6jjm" podStartSLOduration=3.059169837 podStartE2EDuration="5.516352827s" podCreationTimestamp="2026-01-30 17:38:45 +0000 UTC" firstStartedPulling="2026-01-30 17:38:47.466258705 +0000 UTC m=+4582.104216051" lastFinishedPulling="2026-01-30 17:38:49.923441695 +0000 UTC m=+4584.561399041" observedRunningTime="2026-01-30 17:38:50.512684407 +0000 UTC m=+4585.150641773" watchObservedRunningTime="2026-01-30 17:38:50.516352827 +0000 UTC m=+4585.154310173"
Jan 30 17:38:55 crc kubenswrapper[4766]: I0130 17:38:55.039649 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:38:55 crc kubenswrapper[4766]: E0130 17:38:55.040547 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.175864 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.175939 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.222124 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.581147 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:56 crc kubenswrapper[4766]: I0130 17:38:56.624962 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:58 crc kubenswrapper[4766]: I0130 17:38:58.548646 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6jjm" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" containerID="cri-o://19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" gracePeriod=2
Jan 30 17:38:58 crc kubenswrapper[4766]: I0130 17:38:58.918124 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064168 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064271 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.064355 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") pod \"5b2a422f-876d-4faa-9195-7dabd362b052\" (UID: \"5b2a422f-876d-4faa-9195-7dabd362b052\") "
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.065418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities" (OuterVolumeSpecName: "utilities") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.069356 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5" (OuterVolumeSpecName: "kube-api-access-2dhc5") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "kube-api-access-2dhc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.118926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b2a422f-876d-4faa-9195-7dabd362b052" (UID: "5b2a422f-876d-4faa-9195-7dabd362b052"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166395 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166456 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhc5\" (UniqueName: \"kubernetes.io/projected/5b2a422f-876d-4faa-9195-7dabd362b052-kube-api-access-2dhc5\") on node \"crc\" DevicePath \"\""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.166471 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a422f-876d-4faa-9195-7dabd362b052-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556882 4766 generic.go:334] "Generic (PLEG): container finished" podID="5b2a422f-876d-4faa-9195-7dabd362b052" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91" exitCode=0
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556928 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6jjm"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.556931 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"}
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.557050 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6jjm" event={"ID":"5b2a422f-876d-4faa-9195-7dabd362b052","Type":"ContainerDied","Data":"67b5cb77ed90276d3ad55c1d03d0e6cb3bcec17521689084489af36ee219355e"}
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.557071 4766 scope.go:117] "RemoveContainer" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.579146 4766 scope.go:117] "RemoveContainer" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.595358 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.601624 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6jjm"]
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.816093 4766 scope.go:117] "RemoveContainer" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.930837 4766 scope.go:117] "RemoveContainer" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"
Jan 30 17:38:59 crc kubenswrapper[4766]: E0130 17:38:59.931482 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": container with ID starting with 19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91 not found: ID does not exist" containerID="19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.931535 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91"} err="failed to get container status \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": rpc error: code = NotFound desc = could not find container \"19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91\": container with ID starting with 19c2330983a2ffc0a839122354ec20f00b31bbc358acf991745d966472b55d91 not found: ID does not exist"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.931562 4766 scope.go:117] "RemoveContainer" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"
Jan 30 17:38:59 crc kubenswrapper[4766]: E0130 17:38:59.932053 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45\": container with ID starting with 0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45 not found: ID does not exist" containerID="0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.932131 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45"} err="failed to get container status \"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45\": rpc error: code = NotFound desc = could not find container \"0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45\": container with ID starting with 0a8041e29ba4595714d141896afbe542d929ac5f85bebdedc28e6759861cdc45 not found: ID does not exist"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.932158 4766 scope.go:117] "RemoveContainer" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"
Jan 30 17:38:59 crc kubenswrapper[4766]: E0130 17:38:59.932510 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d\": container with ID starting with 7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d not found: ID does not exist" containerID="7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"
Jan 30 17:38:59 crc kubenswrapper[4766]: I0130 17:38:59.932535 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d"} err="failed to get container status \"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d\": rpc error: code = NotFound desc = could not find container \"7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d\": container with ID starting with 7c62846fb9a06c1fe31068f56ff3093499fc642888219760f85c70134964db1d not found: ID does not exist"
Jan 30 17:39:00 crc kubenswrapper[4766]: I0130 17:39:00.051709 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" path="/var/lib/kubelet/pods/5b2a422f-876d-4faa-9195-7dabd362b052/volumes"
Jan 30 17:39:08 crc kubenswrapper[4766]: I0130 17:39:08.039617 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:39:08 crc kubenswrapper[4766]: E0130 17:39:08.040400 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:39:22 crc kubenswrapper[4766]: I0130 17:39:22.039496 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68"
Jan 30 17:39:22 crc kubenswrapper[4766]: I0130 17:39:22.701706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"}
Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.012958 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014152 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-content"
containerName="extract-content" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014168 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-content" Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014286 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-utilities" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014295 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="extract-utilities" Jan 30 17:40:45 crc kubenswrapper[4766]: E0130 17:40:45.014309 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014316 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.014478 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2a422f-876d-4faa-9195-7dabd362b052" containerName="registry-server" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.015326 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.018635 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.018910 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019058 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019499 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.019753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gwzhk" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.029026 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.093831 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.094112 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.094344 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " 
pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195505 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195609 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.195633 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.196464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.196500 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.217825 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"dnsmasq-dns-5d7b5456f5-kl2j6\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.294397 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.296356 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.328647 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.386669 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.398763 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.399154 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.399281 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.500710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.501445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.501687 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.523290 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"dnsmasq-dns-98ddfc8f-ht8gm\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") " pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc 
kubenswrapper[4766]: I0130 17:40:45.625899 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.905280 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"] Jan 30 17:40:45 crc kubenswrapper[4766]: W0130 17:40:45.916585 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2586ecd_ab78_47e4_931c_d0a872a4a404.slice/crio-5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738 WatchSource:0}: Error finding container 5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738: Status 404 returned error can't find the container with id 5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738 Jan 30 17:40:45 crc kubenswrapper[4766]: I0130 17:40:45.920077 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:40:45 crc kubenswrapper[4766]: W0130 17:40:45.930828 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ec757c0_9d3d_4d66_9cd8_742105f2c48e.slice/crio-93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769 WatchSource:0}: Error finding container 93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769: Status 404 returned error can't find the container with id 93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.179881 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.181360 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184448 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184462 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.184744 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.185215 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.185529 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7fqzb" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.195471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.215923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216549 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216673 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216751 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.216853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217116 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217216 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.217320 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.218356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261298 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerID="987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5" exitCode=0 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.261432 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerStarted","Data":"93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263508 4766 generic.go:334] "Generic (PLEG): container finished" podID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584" exitCode=0 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.263582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerStarted","Data":"5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738"} Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319841 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319887 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319946 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.319971 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320018 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320094 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320165 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.320210 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.322118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.322884 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.323157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.323242 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.324973 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.325632 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.325662 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ee0fb1f36d21ee32de31c2c1b35f1f2033c96e9c0c8d1603b6b408ac3d6223f/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.326481 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.328630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.340013 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.379547 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") " pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.453436 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.512787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.514219 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520241 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520607 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520644 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.520786 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bz89s" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.557400 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.571480 4766 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 17:40:46 crc kubenswrapper[4766]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 17:40:46 crc kubenswrapper[4766]: > podSandboxID="5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738" Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.571639 4766 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 17:40:46 crc kubenswrapper[4766]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mndkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d7b5456f5-kl2j6_openstack(d2586ecd-ab78-47e4-931c-d0a872a4a404): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 17:40:46 crc kubenswrapper[4766]: > logger="UnhandledError" Jan 30 17:40:46 crc kubenswrapper[4766]: E0130 17:40:46.573163 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.623976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624026 4766 reconciler_common.go:245] 
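The CreateContainerError above is the one real failure in this window: the dnsmasq-dns container declares a dns-svc volume mount with SubPath:dns-svc, and the runtime cannot bind-mount the prepared subPath source into the container (note the target is logged as the container-relative path etc/dnsmasq.d/hosts/dns-svc). A minimal sketch of the source-path layout implied by the error, assuming the pods/<uid>/volume-subpaths/<volume>/<container>/<mount-index> convention that the logged path suggests; the trailing 1 is consistent with dns-svc being the container's second VolumeMount:

# Reconstructs the subPath bind-mount source quoted in the error above,
# assuming the /var/lib/kubelet/pods/<uid>/volume-subpaths/<volume>/
# <container>/<mount-index> layout that the logged path suggests.
def subpath_source(pod_uid, volume, container, mount_index,
                   root="/var/lib/kubelet"):
    return f"{root}/pods/{pod_uid}/volume-subpaths/{volume}/{container}/{mount_index}"

assert subpath_source("d2586ecd-ab78-47e4-931c-d0a872a4a404",
                      "dns-svc", "dnsmasq-dns", 1) == (
    "/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404"
    "/volume-subpaths/dns-svc/dnsmasq-dns/1")

The failure is transient here: the same pod reports ContainerStarted for dnsmasq-dns about two seconds later, once a subsequent sync retries the mount.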
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624054 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624099 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624413 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624683 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624733 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.624767 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725785 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc 
kubenswrapper[4766]: I0130 17:40:46.725831 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725875 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725940 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.725991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726009 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.726466 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.727088 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.727602 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.728402 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.728430 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a67480c1d51246343d54cce22ecd2529a760cf02f3b5a31cca902016f15d50c3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.731728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.731846 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.733113 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.747655 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.754344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.902219 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: W0130 17:40:46.905009 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e8a2d07_a10c_454f_b5f0_d5fb399de3dc.slice/crio-b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238 WatchSource:0}: Error finding container b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238: Status 404 returned error can't find the container with id b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238 Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.905624 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.943477 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.947108 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.953849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.960849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.961839 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.963064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gl758" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.973654 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 17:40:46 crc kubenswrapper[4766]: I0130 17:40:46.974143 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129875 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129923 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.129951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130032 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130068 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130103 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.130137 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.228557 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.229711 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231068 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231138 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.231348 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233118 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233553 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.233740 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.234979 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e546c4-803f-4379-b5fb-de5ec7f0c79f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.238506 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.239287 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.239317 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/20b0f9adc7994a8fd90688a9d6ad7010a4d3c43b63679c705cf315abd13682e6/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.240675 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e546c4-803f-4379-b5fb-de5ec7f0c79f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.244786 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.250717 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kvd5j" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.251415 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.272320 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsw5k\" (UniqueName: \"kubernetes.io/projected/57e546c4-803f-4379-b5fb-de5ec7f0c79f-kube-api-access-rsw5k\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.276503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerStarted","Data":"a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb"} Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.276591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.279665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238"} Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334221 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.334265 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.346646 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" podStartSLOduration=2.346628006 podStartE2EDuration="2.346628006s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:47.335242285 +0000 UTC m=+4701.973199631" watchObservedRunningTime="2026-01-30 17:40:47.346628006 +0000 UTC m=+4701.984585342" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.420372 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d421ee13-9b44-4636-a518-49cd48f7f9a4\") pod \"openstack-galera-0\" (UID: \"57e546c4-803f-4379-b5fb-de5ec7f0c79f\") " pod="openstack/openstack-galera-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436369 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436467 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.436522 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.438721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kolla-config\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.439225 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08baa9d0-2942-4a73-a75a-d13dc2148bb0-config-data\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.471310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.481872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggnm\" (UniqueName: \"kubernetes.io/projected/08baa9d0-2942-4a73-a75a-d13dc2148bb0-kube-api-access-vggnm\") pod \"memcached-0\" (UID: \"08baa9d0-2942-4a73-a75a-d13dc2148bb0\") " pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.548566 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 17:40:47 crc kubenswrapper[4766]: I0130 17:40:47.571823 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.037057 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: W0130 17:40:48.044245 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57e546c4_803f_4379_b5fb_de5ec7f0c79f.slice/crio-e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7 WatchSource:0}: Error finding container e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7: Status 404 returned error can't find the container with id e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7 Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.075787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: W0130 17:40:48.081585 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08baa9d0_2942_4a73_a75a_d13dc2148bb0.slice/crio-c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec WatchSource:0}: Error finding container c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec: Status 404 returned error can't find the container with id c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.287269 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"a6bfba4c8f09a9b72e6500c3cd5b8a4d9dd328a59974eb580780494c99cc6fcc"} Jan 30 17:40:48 crc 
kubenswrapper[4766]: I0130 17:40:48.288856 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"08baa9d0-2942-4a73-a75a-d13dc2148bb0","Type":"ContainerStarted","Data":"3f5c0de07e7479d50cce8d395f10ab302ea61264980440c7d83b992af8af828d"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.288900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"08baa9d0-2942-4a73-a75a-d13dc2148bb0","Type":"ContainerStarted","Data":"c72b9a5ccc020969707248fe4e3f8b932ea925d284ea3a8e91ffc266790f42ec"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.289001 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.290680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.292953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerStarted","Data":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.293457 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.295710 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.295734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"e437034f89a988c0824416ec3ba988893cd0a78074c94f4865126c5e418923d7"} Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.325837 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.325815782 podStartE2EDuration="1.325815782s" podCreationTimestamp="2026-01-30 17:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:48.319058127 +0000 UTC m=+4702.957015473" watchObservedRunningTime="2026-01-30 17:40:48.325815782 +0000 UTC m=+4702.963773128" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.384496 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podStartSLOduration=4.384474685 podStartE2EDuration="4.384474685s" podCreationTimestamp="2026-01-30 17:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:48.382723376 +0000 UTC m=+4703.020680722" watchObservedRunningTime="2026-01-30 17:40:48.384474685 +0000 UTC m=+4703.022432031" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.609841 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.612792 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.616719 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lqsbw" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617271 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.617713 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.623305 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.759800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760089 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760153 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760235 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760269 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.760304 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862283 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862310 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862344 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.862864 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7c586850-0ed6-4949-9087-0e66405455ce-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863087 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863387 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.863804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c586850-0ed6-4949-9087-0e66405455ce-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866557 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866590 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce7edaffb00e2bd79e7a1aa3a5ee9c0ee7a7f7940e757f6576a1ec1da2cd53f3/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.866951 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.868394 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c586850-0ed6-4949-9087-0e66405455ce-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.893114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87w4\" (UniqueName: \"kubernetes.io/projected/7c586850-0ed6-4949-9087-0e66405455ce-kube-api-access-j87w4\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.897546 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d1f3819-8bc9-4ca2-acf6-d7be73c344d1\") pod \"openstack-cell1-galera-0\" (UID: \"7c586850-0ed6-4949-9087-0e66405455ce\") " pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:48 crc kubenswrapper[4766]: I0130 17:40:48.933519 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:49 crc kubenswrapper[4766]: I0130 17:40:49.302827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c"}
Jan 30 17:40:49 crc kubenswrapper[4766]: I0130 17:40:49.387700 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 17:40:49 crc kubenswrapper[4766]: W0130 17:40:49.388506 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c586850_0ed6_4949_9087_0e66405455ce.slice/crio-dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14 WatchSource:0}: Error finding container dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14: Status 404 returned error can't find the container with id dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14
Jan 30 17:40:50 crc kubenswrapper[4766]: I0130 17:40:50.316300 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1"}
Jan 30 17:40:50 crc kubenswrapper[4766]: I0130 17:40:50.316797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"dc93468d21c221aa92565bb5533f4f61f5083ceac44bcfbb73d5e116a7cf9d14"}
Jan 30 17:40:52 crc kubenswrapper[4766]: I0130 17:40:52.331428 4766 generic.go:334] "Generic (PLEG): container finished" podID="57e546c4-803f-4379-b5fb-de5ec7f0c79f" containerID="3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6" exitCode=0
Jan 30 17:40:52 crc kubenswrapper[4766]: I0130 17:40:52.331507 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerDied","Data":"3d2175d8409e41a53fb147d7f034704e23413198ecfec42d0e06c440e7ce21a6"}
Jan 30 17:40:53 crc kubenswrapper[4766]: I0130 17:40:53.340388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e546c4-803f-4379-b5fb-de5ec7f0c79f","Type":"ContainerStarted","Data":"45984f6374a5f85fca8559d6af13242174c7dbe17d36d867af8a33da7b1e938e"}
Jan 30 17:40:53 crc kubenswrapper[4766]: I0130 17:40:53.358777 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.358756543 podStartE2EDuration="8.358756543s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:53.357960831 +0000 UTC m=+4707.995918177" watchObservedRunningTime="2026-01-30 17:40:53.358756543 +0000 UTC m=+4707.996713889"
Jan 30 17:40:54 crc kubenswrapper[4766]: I0130 17:40:54.350043 4766 generic.go:334] "Generic (PLEG): container finished" podID="7c586850-0ed6-4949-9087-0e66405455ce" containerID="ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1" exitCode=0
Jan 30 17:40:54 crc kubenswrapper[4766]: I0130 17:40:54.350345 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerDied","Data":"ca1b02c90db7d4d28988f0f88956689e5c7275ed839e19f3bcbb29fb897fb0a1"}
Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.357270 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7c586850-0ed6-4949-9087-0e66405455ce","Type":"ContainerStarted","Data":"ec1fc88488de61b67d6907b2c45b40de972cade7708c22780797640de9ebe4c4"}
Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.384700 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.38467738 podStartE2EDuration="8.38467738s" podCreationTimestamp="2026-01-30 17:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:40:55.378141381 +0000 UTC m=+4710.016098747" watchObservedRunningTime="2026-01-30 17:40:55.38467738 +0000 UTC m=+4710.022634736"
Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.389217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6"
Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.628510 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm"
Jan 30 17:40:55 crc kubenswrapper[4766]: I0130 17:40:55.676360 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.364676 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns" containerID="cri-o://50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" gracePeriod=10
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.788752 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6"
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889626 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") "
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889738 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") "
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.889833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") pod \"d2586ecd-ab78-47e4-931c-d0a872a4a404\" (UID: \"d2586ecd-ab78-47e4-931c-d0a872a4a404\") "
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.900986 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq" (OuterVolumeSpecName: "kube-api-access-mndkq") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "kube-api-access-mndkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.931961 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.933706 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config" (OuterVolumeSpecName: "config") pod "d2586ecd-ab78-47e4-931c-d0a872a4a404" (UID: "d2586ecd-ab78-47e4-931c-d0a872a4a404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991301 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991357 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2586ecd-ab78-47e4-931c-d0a872a4a404-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:56 crc kubenswrapper[4766]: I0130 17:40:56.991368 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndkq\" (UniqueName: \"kubernetes.io/projected/d2586ecd-ab78-47e4-931c-d0a872a4a404-kube-api-access-mndkq\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373065 4766 generic.go:334] "Generic (PLEG): container finished" podID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae" exitCode=0
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373127 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"}
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kl2j6" event={"ID":"d2586ecd-ab78-47e4-931c-d0a872a4a404","Type":"ContainerDied","Data":"5573cc16d953ef3d82ed20166e2300bea2dfbd9f1a7a8acbc6d631fab91e0738"}
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.373275 4766 scope.go:117] "RemoveContainer" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.391720 4766 scope.go:117] "RemoveContainer" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.406532 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.412453 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kl2j6"]
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.429712 4766 scope.go:117] "RemoveContainer" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"
Jan 30 17:40:57 crc kubenswrapper[4766]: E0130 17:40:57.430355 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": container with ID starting with 50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae not found: ID does not exist" containerID="50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.430399 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae"} err="failed to get container status \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": rpc error: code = NotFound desc = could not find container \"50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae\": container with ID starting with 50e4458bfe967c551ed69e8a0c74db33b10da1bd35f70c27724fb6f1a38673ae not found: ID does not exist"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.430429 4766 scope.go:117] "RemoveContainer" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"
Jan 30 17:40:57 crc kubenswrapper[4766]: E0130 17:40:57.431050 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": container with ID starting with 836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584 not found: ID does not exist" containerID="836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.431103 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584"} err="failed to get container status \"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": rpc error: code = NotFound desc = could not find container \"836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584\": container with ID starting with 836cec7dda9e74db703ca5d380e514f9a9e8c88ed4781ac11483a5c4f06b7584 not found: ID does not exist"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.550412 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.573530 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.574620 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 17:40:57 crc kubenswrapper[4766]: I0130 17:40:57.647501 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.051372 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" path="/var/lib/kubelet/pods/d2586ecd-ab78-47e4-931c-d0a872a4a404/volumes"
Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.454807 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.933642 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:58 crc kubenswrapper[4766]: I0130 17:40:58.933713 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:59 crc kubenswrapper[4766]: I0130 17:40:59.003508 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:40:59 crc kubenswrapper[4766]: I0130 17:40:59.454151 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.902846 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2zjtv"]
Jan 30 17:41:05 crc kubenswrapper[4766]: E0130 17:41:05.903648 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903663 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns"
Jan 30 17:41:05 crc kubenswrapper[4766]: E0130 17:41:05.903690 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="init"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903697 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="init"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.903831 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2586ecd-ab78-47e4-931c-d0a872a4a404" containerName="dnsmasq-dns"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.904411 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.908298 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 17:41:05 crc kubenswrapper[4766]: I0130 17:41:05.911268 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2zjtv"]
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.027779 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.028223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.129443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.129567 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.130419 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.160703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"root-account-create-update-2zjtv\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") " pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.227985 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:06 crc kubenswrapper[4766]: I0130 17:41:06.705652 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2zjtv"]
Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.465581 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerStarted","Data":"0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"}
Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.465940 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerStarted","Data":"0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778"}
Jan 30 17:41:07 crc kubenswrapper[4766]: I0130 17:41:07.485923 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2zjtv" podStartSLOduration=2.485898126 podStartE2EDuration="2.485898126s" podCreationTimestamp="2026-01-30 17:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:07.480098658 +0000 UTC m=+4722.118056004" watchObservedRunningTime="2026-01-30 17:41:07.485898126 +0000 UTC m=+4722.123855472"
Jan 30 17:41:08 crc kubenswrapper[4766]: I0130 17:41:08.474074 4766 generic.go:334] "Generic (PLEG): container finished" podID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerID="0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4" exitCode=0
Jan 30 17:41:08 crc kubenswrapper[4766]: I0130 17:41:08.474135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerDied","Data":"0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"}
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.753487 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.887336 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") pod \"7304e777-31df-44d9-932a-e9dfde1ebad9\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") "
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.887910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") pod \"7304e777-31df-44d9-932a-e9dfde1ebad9\" (UID: \"7304e777-31df-44d9-932a-e9dfde1ebad9\") "
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.888564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7304e777-31df-44d9-932a-e9dfde1ebad9" (UID: "7304e777-31df-44d9-932a-e9dfde1ebad9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.889367 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7304e777-31df-44d9-932a-e9dfde1ebad9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.893462 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd" (OuterVolumeSpecName: "kube-api-access-rnhqd") pod "7304e777-31df-44d9-932a-e9dfde1ebad9" (UID: "7304e777-31df-44d9-932a-e9dfde1ebad9"). InnerVolumeSpecName "kube-api-access-rnhqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:09 crc kubenswrapper[4766]: I0130 17:41:09.989869 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnhqd\" (UniqueName: \"kubernetes.io/projected/7304e777-31df-44d9-932a-e9dfde1ebad9-kube-api-access-rnhqd\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487301 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2zjtv" event={"ID":"7304e777-31df-44d9-932a-e9dfde1ebad9","Type":"ContainerDied","Data":"0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778"}
Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487345 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea17d040e0fa1847b3a68fc75819ef2d8e63a51206c5f3eb9a83a57a8c64778"
Jan 30 17:41:10 crc kubenswrapper[4766]: I0130 17:41:10.487397 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2zjtv"
Jan 30 17:41:12 crc kubenswrapper[4766]: I0130 17:41:12.462806 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2zjtv"]
Jan 30 17:41:12 crc kubenswrapper[4766]: I0130 17:41:12.470405 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2zjtv"]
Jan 30 17:41:14 crc kubenswrapper[4766]: I0130 17:41:14.051528 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" path="/var/lib/kubelet/pods/7304e777-31df-44d9-932a-e9dfde1ebad9/volumes"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.491699 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xfq5b"]
Jan 30 17:41:17 crc kubenswrapper[4766]: E0130 17:41:17.492524 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.492541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.492747 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304e777-31df-44d9-932a-e9dfde1ebad9" containerName="mariadb-account-create-update"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.493397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.496305 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.497504 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xfq5b"]
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.515086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.515429 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.617375 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.617875 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.619168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.640237 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"root-account-create-update-xfq5b\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") " pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:17 crc kubenswrapper[4766]: I0130 17:41:17.817295 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.263799 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xfq5b"]
Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.538677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerStarted","Data":"a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc"}
Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.538730 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerStarted","Data":"3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3"}
Jan 30 17:41:18 crc kubenswrapper[4766]: I0130 17:41:18.555445 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xfq5b" podStartSLOduration=1.555423694 podStartE2EDuration="1.555423694s" podCreationTimestamp="2026-01-30 17:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:18.551087085 +0000 UTC m=+4733.189044431" watchObservedRunningTime="2026-01-30 17:41:18.555423694 +0000 UTC m=+4733.193381040"
Jan 30 17:41:19 crc kubenswrapper[4766]: I0130 17:41:19.548110 4766 generic.go:334] "Generic (PLEG): container finished" podID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerID="a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc" exitCode=0
Jan 30 17:41:19 crc kubenswrapper[4766]: I0130 17:41:19.548324 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerDied","Data":"a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc"}
Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.556912 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerID="b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7" exitCode=0
Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.557100 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7"}
Jan 30 17:41:20 crc kubenswrapper[4766]: I0130 17:41:20.924929 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.061309 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") pod \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") "
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.061429 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") pod \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\" (UID: \"0e74a4a8-0c9c-4bba-b839-4caeca1e9304\") "
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.062124 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e74a4a8-0c9c-4bba-b839-4caeca1e9304" (UID: "0e74a4a8-0c9c-4bba-b839-4caeca1e9304"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.066614 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f" (OuterVolumeSpecName: "kube-api-access-vp89f") pod "0e74a4a8-0c9c-4bba-b839-4caeca1e9304" (UID: "0e74a4a8-0c9c-4bba-b839-4caeca1e9304"). InnerVolumeSpecName "kube-api-access-vp89f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.162689 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.162736 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp89f\" (UniqueName: \"kubernetes.io/projected/0e74a4a8-0c9c-4bba-b839-4caeca1e9304-kube-api-access-vp89f\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.569799 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerID="5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c" exitCode=0
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.569955 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c"}
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.573020 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerStarted","Data":"90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24"}
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.573364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574721 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xfq5b" event={"ID":"0e74a4a8-0c9c-4bba-b839-4caeca1e9304","Type":"ContainerDied","Data":"3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3"}
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574765 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a81c3d928a5d3f56971ccaef0e640c80858dea47bdca5959804ed5cf15fd0d3"
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.574835 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xfq5b"
Jan 30 17:41:21 crc kubenswrapper[4766]: I0130 17:41:21.654540 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.654522175 podStartE2EDuration="36.654522175s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:21.648629764 +0000 UTC m=+4736.286587110" watchObservedRunningTime="2026-01-30 17:41:21.654522175 +0000 UTC m=+4736.292479521"
Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.584519 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerStarted","Data":"afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae"}
Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.584975 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:22 crc kubenswrapper[4766]: I0130 17:41:22.606140 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.606118357 podStartE2EDuration="37.606118357s" podCreationTimestamp="2026-01-30 17:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:22.604271466 +0000 UTC m=+4737.242228822" watchObservedRunningTime="2026-01-30 17:41:22.606118357 +0000 UTC m=+4737.244075703"
Jan 30 17:41:36 crc kubenswrapper[4766]: I0130 17:41:36.456657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:36 crc kubenswrapper[4766]: I0130 17:41:36.910116 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.045604 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.045995 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.100795 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"]
Jan 30 17:41:39 crc kubenswrapper[4766]: E0130 17:41:39.101344 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.101369 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.101734 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" containerName="mariadb-account-create-update"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.103041 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.118659 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"]
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.244558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345938 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.345991 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.347108 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.347280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.378348 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"dnsmasq-dns-5b7946d7b9-pmwzk\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.443686 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.715594 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:39 crc kubenswrapper[4766]: I0130 17:41:39.762236 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"]
Jan 30 17:41:39 crc kubenswrapper[4766]: W0130 17:41:39.770728 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83b52c39_5b23_4e74_abf9_0018a54b215e.slice/crio-413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199 WatchSource:0}: Error finding container 413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199: Status 404 returned error can't find the container with id 413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199
Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.463411 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729620 4766 generic.go:334] "Generic (PLEG): container finished" podID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" exitCode=0
Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89"}
Jan 30 17:41:40 crc kubenswrapper[4766]: I0130 17:41:40.729725 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerStarted","Data":"413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199"}
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.576023 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" containerID="cri-o://90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24" gracePeriod=604799
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.737733 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerStarted","Data":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"}
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.737862 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:41 crc kubenswrapper[4766]: I0130 17:41:41.757899 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" podStartSLOduration=2.757882505 podStartE2EDuration="2.757882505s" podCreationTimestamp="2026-01-30 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:41:41.753674981 +0000 UTC m=+4756.391632327" watchObservedRunningTime="2026-01-30 17:41:41.757882505 +0000 UTC m=+4756.395839851"
Jan 30 17:41:42 crc kubenswrapper[4766]: I0130 17:41:42.180292 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" containerID="cri-o://afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae" gracePeriod=604799
Jan 30 17:41:46 crc kubenswrapper[4766]: I0130 17:41:46.454338 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5672: connect: connection refused"
Jan 30 17:41:46 crc kubenswrapper[4766]: I0130 17:41:46.907117 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.240:5672: connect: connection refused"
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.786606 4766 generic.go:334] "Generic (PLEG): container finished" podID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerID="90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24" exitCode=0
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.786693 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24"}
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.790475 4766 generic.go:334] "Generic (PLEG): container finished" podID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerID="afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae" exitCode=0
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.790524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae"}
Jan 30 17:41:48 crc kubenswrapper[4766]: I0130 17:41:48.973448 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098368 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098426 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098488 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098529 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098580 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098622 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098745 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098772 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.098820 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") pod \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\" (UID: \"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099013 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099155 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.099693 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.100116 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.114582 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7" (OuterVolumeSpecName: "kube-api-access-ltxf7") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "kube-api-access-ltxf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.114996 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.117578 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.118714 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4" (OuterVolumeSpecName: "persistence") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "pvc-ba9ce260-411e-465e-825e-cb85f0d828d4". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.132604 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.195669 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" (UID: "9e8a2d07-a10c-454f-b5f0-d5fb399de3dc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200699 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200762 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") on node \"crc\" "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200781 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200793 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200805 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200816 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200828 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxf7\" (UniqueName: \"kubernetes.io/projected/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-kube-api-access-ltxf7\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.200842 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.219478 4766 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.219613 4766 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ba9ce260-411e-465e-825e-cb85f0d828d4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4") on node "crc"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.268343 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.305142 4766 reconciler_common.go:293] "Volume detached for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406591 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406666 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406792 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406812 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406887 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406904 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406938 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.406966 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407040 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") pod \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\" (UID: \"b9cdb86f-7214-4a3e-818a-dd6936b19daf\") "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407191 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407553 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407728 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.407841 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.410542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk" (OuterVolumeSpecName: "kube-api-access-hj6mk") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "kube-api-access-hj6mk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.411050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info" (OuterVolumeSpecName: "pod-info") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.412350 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.419265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8" (OuterVolumeSpecName: "persistence") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.428203 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf" (OuterVolumeSpecName: "server-conf") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.445352 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.489202 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b9cdb86f-7214-4a3e-818a-dd6936b19daf" (UID: "b9cdb86f-7214-4a3e-818a-dd6936b19daf"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.492796 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.493018 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" containerID="cri-o://a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb" gracePeriod=10
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509527 4766 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9cdb86f-7214-4a3e-818a-dd6936b19daf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509589 4766 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") on node \"crc\" "
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509603 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509613 4766 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509622 4766 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9cdb86f-7214-4a3e-818a-dd6936b19daf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509630 4766 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9cdb86f-7214-4a3e-818a-dd6936b19daf-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509638 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6mk\" (UniqueName: \"kubernetes.io/projected/b9cdb86f-7214-4a3e-818a-dd6936b19daf-kube-api-access-hj6mk\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.509647 4766 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9cdb86f-7214-4a3e-818a-dd6936b19daf-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.530915 4766 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.531114 4766 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8") on node "crc"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.611309 4766 reconciler_common.go:293] "Volume detached for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798178 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e8a2d07-a10c-454f-b5f0-d5fb399de3dc","Type":"ContainerDied","Data":"b5eea1df367acc4968b8813886b48206984ec921b296ed8c33229a96aaba3238"}
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798214 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.798252 4766 scope.go:117] "RemoveContainer" containerID="90da3d8ded2aeba6de2be254532a2e4ec6ceb21d77172879f5a52d9cea491e24"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.800706 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9cdb86f-7214-4a3e-818a-dd6936b19daf","Type":"ContainerDied","Data":"a6bfba4c8f09a9b72e6500c3cd5b8a4d9dd328a59974eb580780494c99cc6fcc"}
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.800768 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.802890 4766 generic.go:334] "Generic (PLEG): container finished" podID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerID="a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb" exitCode=0
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.802948 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb"}
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.816365 4766 scope.go:117] "RemoveContainer" containerID="b675ff1cca1887242f7fe886c969fa2c7a3239d5c0b07658edae799b86b555a7"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.833697 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.841140 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.863561 4766 scope.go:117] "RemoveContainer" containerID="afb3a1a69becdca84d4614986ca161768ac83342e70fd972e16d882fe41cf9ae"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.871695 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.883646 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.888943 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889331 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889353 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889377 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="setup-container"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889387 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="setup-container"
Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889406 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="setup-container"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889406 4766 scope.go:117] "RemoveContainer" containerID="5121888509f9bb894d32efd5aae0d010bb82beed7fef4e339f209ac41ce7486c"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889414 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="setup-container"
Jan 30 17:41:49 crc kubenswrapper[4766]: E0130 17:41:49.889530 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889541 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889811 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.889836 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" containerName="rabbitmq"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.890700 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.892716 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.892932 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.893028 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.893168 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7fqzb"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.894515 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.895238 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.896370 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.897872 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.898855 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.898977 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.899140 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bz89s"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.899263 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.900511 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:49 crc kubenswrapper[4766]: I0130 17:41:49.909821 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017056 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017863 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.017982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018010 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018042 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018080 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018109 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018161 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018256 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018325 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018368 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018425 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.018455 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.051058 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8a2d07-a10c-454f-b5f0-d5fb399de3dc" path="/var/lib/kubelet/pods/9e8a2d07-a10c-454f-b5f0-d5fb399de3dc/volumes"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.052189 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9cdb86f-7214-4a3e-818a-dd6936b19daf" path="/var/lib/kubelet/pods/b9cdb86f-7214-4a3e-818a-dd6936b19daf/volumes"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120059 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120108 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120132 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120158 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120178 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120233 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120268 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120284 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120324 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120343 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120368 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120427 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120468 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.120486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121067 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd0348d-2f44-4961-9503-eb8ce09016d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121745 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.121799 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.122223 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.122623 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b579b360-d367-4637-8bf4-24be247f4daf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.123338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125933 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b579b360-d367-4637-8bf4-24be247f4daf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.125966 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ee0fb1f36d21ee32de31c2c1b35f1f2033c96e9c0c8d1603b6b408ac3d6223f/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.126986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b579b360-d367-4637-8bf4-24be247f4daf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127149 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd0348d-2f44-4961-9503-eb8ce09016d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd0348d-2f44-4961-9503-eb8ce09016d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127763 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.127791 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a67480c1d51246343d54cce22ecd2529a760cf02f3b5a31cca902016f15d50c3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.130801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.142497 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q75lm\" (UniqueName: \"kubernetes.io/projected/b579b360-d367-4637-8bf4-24be247f4daf-kube-api-access-q75lm\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.148252 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkpkm\" (UniqueName: \"kubernetes.io/projected/1fd0348d-2f44-4961-9503-eb8ce09016d8-kube-api-access-nkpkm\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.160445 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba9ce260-411e-465e-825e-cb85f0d828d4\") pod \"rabbitmq-server-0\" (UID: \"b579b360-d367-4637-8bf4-24be247f4daf\") " pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.167392 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b51917bf-6d02-4b8e-98a7-9f99636938a8\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd0348d-2f44-4961-9503-eb8ce09016d8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.281281 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.298772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.455748 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526501 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") "
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526581 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") "
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.526699 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") pod \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\" (UID: \"5ec757c0-9d3d-4d66-9cd8-742105f2c48e\") "
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.530418 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx" (OuterVolumeSpecName: "kube-api-access-dnjtx") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "kube-api-access-dnjtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.558859 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config" (OuterVolumeSpecName: "config") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.559501 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ec757c0-9d3d-4d66-9cd8-742105f2c48e" (UID: "5ec757c0-9d3d-4d66-9cd8-742105f2c48e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628422 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628455 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.628465 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnjtx\" (UniqueName: \"kubernetes.io/projected/5ec757c0-9d3d-4d66-9cd8-742105f2c48e-kube-api-access-dnjtx\") on node \"crc\" DevicePath \"\""
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.751934 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.786859 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 17:41:50 crc kubenswrapper[4766]: W0130 17:41:50.788853 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd0348d_2f44_4961_9503_eb8ce09016d8.slice/crio-7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58 WatchSource:0}: Error finding container 7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58: Status 404 returned error can't find the container with id 7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.816752 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"6158fa7c90c40c3905ef3369b347739baa1209eb6794f989832d1a300a02e3de"}
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.818633 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"7aa758ccaa41a9662023737ab421ac8e9714ab9d4dbe298de7914ad3ec0b6d58"}
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.822982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" event={"ID":"5ec757c0-9d3d-4d66-9cd8-742105f2c48e","Type":"ContainerDied","Data":"93f3051a2e2fb18e0409a776a8675fba5f3199edbc1b6a3cbce75cefe563e769"}
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.823042 4766 scope.go:117] "RemoveContainer" containerID="a41bb6492a4775abf65f979bb5fa7a9593fae4739f7119a8735ab9ea5cd43dfb"
Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.823051 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm"
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-ht8gm" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.841950 4766 scope.go:117] "RemoveContainer" containerID="987678b0c80e2ab072f159429ab8a830d6004ce03b8e464f8fa8d15fb7f56bd5" Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.853152 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:41:50 crc kubenswrapper[4766]: I0130 17:41:50.858763 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-ht8gm"] Jan 30 17:41:51 crc kubenswrapper[4766]: I0130 17:41:51.832716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07"} Jan 30 17:41:51 crc kubenswrapper[4766]: I0130 17:41:51.834996 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d"} Jan 30 17:41:52 crc kubenswrapper[4766]: I0130 17:41:52.048261 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" path="/var/lib/kubelet/pods/5ec757c0-9d3d-4d66-9cd8-742105f2c48e/volumes" Jan 30 17:42:09 crc kubenswrapper[4766]: I0130 17:42:09.045272 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:42:09 crc kubenswrapper[4766]: I0130 17:42:09.045770 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.065813 4766 generic.go:334] "Generic (PLEG): container finished" podID="b579b360-d367-4637-8bf4-24be247f4daf" containerID="528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07" exitCode=0 Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.065908 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerDied","Data":"528dc1be45c1fa71884fbf948c0b03035abf8f0497d38922787990286c05fb07"} Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.068497 4766 generic.go:334] "Generic (PLEG): container finished" podID="1fd0348d-2f44-4961-9503-eb8ce09016d8" containerID="63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d" exitCode=0 Jan 30 17:42:24 crc kubenswrapper[4766]: I0130 17:42:24.068541 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerDied","Data":"63aa9db7ce728dc6b379a3a3ae24390eec924085ffc2204e788f8997dce28e2d"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.092292 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b579b360-d367-4637-8bf4-24be247f4daf","Type":"ContainerStarted","Data":"3a9a754e2871aa2dcf9c538d95d0a137d0ee2fca4a3dddf391ff4585dc468eb1"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.092974 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.095975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd0348d-2f44-4961-9503-eb8ce09016d8","Type":"ContainerStarted","Data":"fbbe5bec4b359c72e85fc61bd1c297c0a5b74557b6c30d2687f2232b936a4140"} Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.096217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.123115 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.123086765 podStartE2EDuration="36.123086765s" podCreationTimestamp="2026-01-30 17:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:25.11516212 +0000 UTC m=+4799.753119466" watchObservedRunningTime="2026-01-30 17:42:25.123086765 +0000 UTC m=+4799.761044111" Jan 30 17:42:25 crc kubenswrapper[4766]: I0130 17:42:25.145078 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.145054231 podStartE2EDuration="36.145054231s" podCreationTimestamp="2026-01-30 17:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:25.143479638 +0000 UTC m=+4799.781436984" watchObservedRunningTime="2026-01-30 17:42:25.145054231 +0000 UTC m=+4799.783011577" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.045600 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046124 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046732 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.046799 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c" gracePeriod=600 Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206289 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c" exitCode=0 Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206374 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c"} Jan 30 17:42:39 crc kubenswrapper[4766]: I0130 17:42:39.206675 4766 scope.go:117] "RemoveContainer" containerID="62b27159543c2d7874b57e155df6cf176eef5367eb321b1133ba7cc2464a1a68" Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.221073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"} Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.285506 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 17:42:40 crc kubenswrapper[4766]: I0130 17:42:40.304265 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.681638 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 17:42:50 crc kubenswrapper[4766]: E0130 17:42:50.682428 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682441 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: E0130 17:42:50.682460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="init" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682466 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="init" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.682601 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec757c0-9d3d-4d66-9cd8-742105f2c48e" containerName="dnsmasq-dns" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.683304 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.689069 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.689799 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-slmpt" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.827159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.928743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client" Jan 30 17:42:50 crc kubenswrapper[4766]: I0130 17:42:50.956341 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"mariadb-client\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " pod="openstack/mariadb-client" Jan 30 17:42:51 crc kubenswrapper[4766]: I0130 17:42:51.001101 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 17:42:51 crc kubenswrapper[4766]: I0130 17:42:51.507134 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.300603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerStarted","Data":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"} Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.301103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerStarted","Data":"d5fca061ae43a81617098f04a8518ad9f8c173148013c0de0c644f6920fe37cb"} Jan 30 17:42:52 crc kubenswrapper[4766]: I0130 17:42:52.324622 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.324591436 podStartE2EDuration="2.324591436s" podCreationTimestamp="2026-01-30 17:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:42:52.315709575 +0000 UTC m=+4826.953666921" watchObservedRunningTime="2026-01-30 17:42:52.324591436 +0000 UTC m=+4826.962548782" Jan 30 17:43:05 crc kubenswrapper[4766]: I0130 17:43:05.607004 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 17:43:05 crc kubenswrapper[4766]: I0130 17:43:05.607746 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client" containerID="cri-o://79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" gracePeriod=30 Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.043789 4766 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.154109 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") pod \"1129ee55-bf4e-46de-849a-fe2fa0de8181\" (UID: \"1129ee55-bf4e-46de-849a-fe2fa0de8181\") " Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.159584 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr" (OuterVolumeSpecName: "kube-api-access-wczvr") pod "1129ee55-bf4e-46de-849a-fe2fa0de8181" (UID: "1129ee55-bf4e-46de-849a-fe2fa0de8181"). InnerVolumeSpecName "kube-api-access-wczvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.255643 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wczvr\" (UniqueName: \"kubernetes.io/projected/1129ee55-bf4e-46de-849a-fe2fa0de8181-kube-api-access-wczvr\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406046 4766 generic.go:334] "Generic (PLEG): container finished" podID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" exitCode=143 Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406100 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerDied","Data":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"} Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406259 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1129ee55-bf4e-46de-849a-fe2fa0de8181","Type":"ContainerDied","Data":"d5fca061ae43a81617098f04a8518ad9f8c173148013c0de0c644f6920fe37cb"} Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.406279 4766 scope.go:117] "RemoveContainer" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.425994 4766 scope.go:117] "RemoveContainer" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" Jan 30 17:43:06 crc kubenswrapper[4766]: E0130 17:43:06.426396 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": container with ID starting with 79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108 not found: ID does not exist" containerID="79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.426430 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108"} err="failed to get container status \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": rpc error: code = NotFound desc = could not find container \"79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108\": container with 
ID starting with 79a3b7f4eba22a75e63f5463101291fbd04068411bae413a72681963a67ee108 not found: ID does not exist" Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.446641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 17:43:06 crc kubenswrapper[4766]: I0130 17:43:06.454784 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 17:43:08 crc kubenswrapper[4766]: I0130 17:43:08.047931 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" path="/var/lib/kubelet/pods/1129ee55-bf4e-46de-849a-fe2fa0de8181/volumes" Jan 30 17:43:31 crc kubenswrapper[4766]: I0130 17:43:31.430270 4766 scope.go:117] "RemoveContainer" containerID="3c2bcfb1e73c683e268e22a58c61847b65be47ed0077a6171ee0609e464de262" Jan 30 17:44:39 crc kubenswrapper[4766]: I0130 17:44:39.045620 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:44:39 crc kubenswrapper[4766]: I0130 17:44:39.046646 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.144799 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"] Jan 30 17:45:00 crc kubenswrapper[4766]: E0130 17:45:00.145785 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.145805 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.145982 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1129ee55-bf4e-46de-849a-fe2fa0de8181" containerName="mariadb-client" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.146593 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.149001 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.149248 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.153127 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"] Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.273882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.273948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.274008 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.374789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.374927 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.375049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.375840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod 
\"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.383826 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.394151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"collect-profiles-29496585-mftxm\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.468500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:00 crc kubenswrapper[4766]: I0130 17:45:00.878994 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm"] Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.578975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerStarted","Data":"ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93"} Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.579356 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerStarted","Data":"424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c"} Jan 30 17:45:01 crc kubenswrapper[4766]: I0130 17:45:01.598756 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" podStartSLOduration=1.598735786 podStartE2EDuration="1.598735786s" podCreationTimestamp="2026-01-30 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:45:01.597670906 +0000 UTC m=+4956.235628262" watchObservedRunningTime="2026-01-30 17:45:01.598735786 +0000 UTC m=+4956.236693132" Jan 30 17:45:02 crc kubenswrapper[4766]: I0130 17:45:02.590635 4766 generic.go:334] "Generic (PLEG): container finished" podID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerID="ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93" exitCode=0 Jan 30 17:45:02 crc kubenswrapper[4766]: I0130 17:45:02.590750 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerDied","Data":"ca11ea3447baddfdba3d4121a5ab360e5aad6d36ff04e23e12c6802b7d8b1f93"} Jan 30 17:45:03 crc kubenswrapper[4766]: I0130 17:45:03.881960 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025564 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.025847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") pod \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\" (UID: \"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50\") " Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.026126 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.026475 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.031559 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.031849 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp" (OuterVolumeSpecName: "kube-api-access-7ddbp") pod "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" (UID: "8a7ea96a-39de-4a8a-b0ce-e7778f12fe50"). InnerVolumeSpecName "kube-api-access-7ddbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.128330 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ddbp\" (UniqueName: \"kubernetes.io/projected/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-kube-api-access-7ddbp\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.128366 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a7ea96a-39de-4a8a-b0ce-e7778f12fe50-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" event={"ID":"8a7ea96a-39de-4a8a-b0ce-e7778f12fe50","Type":"ContainerDied","Data":"424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c"} Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607347 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424847fe74f64e214d5fda0e3b977bb63b7a27bbff46d0f731551acb2e88fe4c" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.607377 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-mftxm" Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.666400 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:45:04 crc kubenswrapper[4766]: I0130 17:45:04.672126 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-qfpbr"] Jan 30 17:45:06 crc kubenswrapper[4766]: I0130 17:45:06.048779 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d00d929-3c4f-4555-b75b-a39750dc609b" path="/var/lib/kubelet/pods/3d00d929-3c4f-4555-b75b-a39750dc609b/volumes" Jan 30 17:45:09 crc kubenswrapper[4766]: I0130 17:45:09.045507 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:45:09 crc kubenswrapper[4766]: I0130 17:45:09.046828 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:45:31 crc kubenswrapper[4766]: I0130 17:45:31.497749 4766 scope.go:117] "RemoveContainer" containerID="d1bbe33187614be0056c390feb3f40bb39d47764bf4e3d7add03326875657c91" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.045596 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046107 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046748 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.046805 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" gracePeriod=600 Jan 30 17:45:39 crc kubenswrapper[4766]: E0130 17:45:39.245014 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908874 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" exitCode=0 Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"} Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.908979 4766 scope.go:117] "RemoveContainer" containerID="6d73f468d7a4ee2dec8ec549cbfd2340a24d3dd9f72d5b67bcf478d5bc8a9a1c" Jan 30 17:45:39 crc kubenswrapper[4766]: I0130 17:45:39.909615 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:45:39 crc kubenswrapper[4766]: E0130 17:45:39.909905 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:45:52 crc kubenswrapper[4766]: I0130 17:45:52.040028 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:45:52 crc kubenswrapper[4766]: E0130 17:45:52.041108 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:03 crc kubenswrapper[4766]: I0130 17:46:03.039690 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:03 crc kubenswrapper[4766]: E0130 17:46:03.040557 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.606906 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:09 crc kubenswrapper[4766]: E0130 17:46:09.607826 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.607843 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.608080 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7ea96a-39de-4a8a-b0ce-e7778f12fe50" containerName="collect-profiles" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.609693 4766 util.go:30] "No sandbox for pod can be found. 
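The repeated pod_workers entries above show CrashLoopBackOff pinned at the 5m0s cap for machine-config-daemon: each failed restart lengthens the back-off delay until it reaches that cap. A sketch of the pattern; the 10s initial delay and the doubling factor are assumed kubelet defaults, and only the 5m cap appears in the log itself:

// Illustrate the capped exponential restart back-off behind "back-off 5m0s".
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second         // assumed base delay
	const maxDelay = 5 * time.Minute  // the "back-off 5m0s" cap seen in the log
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}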
Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.617938 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663480 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663611 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.663765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765418 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765465 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.765998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.766401 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.789266 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"community-operators-gq8x4\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:09 crc kubenswrapper[4766]: I0130 17:46:09.930527 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:10 crc kubenswrapper[4766]: I0130 17:46:10.439018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133074 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6" exitCode=0 Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133142 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"} Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.133573 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerStarted","Data":"716bfe1daea03209a9d4d6f8afa485fed91c4531dcf18c6a919290635d33e7c7"} Jan 30 17:46:11 crc kubenswrapper[4766]: I0130 17:46:11.134797 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:46:13 crc kubenswrapper[4766]: I0130 17:46:13.146240 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d" exitCode=0 Jan 30 17:46:13 crc kubenswrapper[4766]: I0130 17:46:13.146346 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"} Jan 30 17:46:14 crc kubenswrapper[4766]: I0130 17:46:14.155267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerStarted","Data":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"} Jan 30 17:46:14 crc kubenswrapper[4766]: I0130 17:46:14.171820 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gq8x4" podStartSLOduration=2.543201222 podStartE2EDuration="5.171794871s" podCreationTimestamp="2026-01-30 17:46:09 +0000 UTC" firstStartedPulling="2026-01-30 17:46:11.13457515 +0000 UTC m=+5025.772532496" lastFinishedPulling="2026-01-30 17:46:13.763168799 +0000 UTC m=+5028.401126145" observedRunningTime="2026-01-30 17:46:14.171287698 +0000 UTC m=+5028.809245074" watchObservedRunningTime="2026-01-30 17:46:14.171794871 +0000 UTC m=+5028.809752217" Jan 30 17:46:15 crc kubenswrapper[4766]: I0130 17:46:15.039854 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:15 crc kubenswrapper[4766]: E0130 17:46:15.040768 4766 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.931159 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.931589 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:19 crc kubenswrapper[4766]: I0130 17:46:19.982192 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:20 crc kubenswrapper[4766]: I0130 17:46:20.267751 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:20 crc kubenswrapper[4766]: I0130 17:46:20.322746 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:22 crc kubenswrapper[4766]: I0130 17:46:22.210495 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gq8x4" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server" containerID="cri-o://995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" gracePeriod=2 Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.102706 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.219986 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" exitCode=0 Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220038 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq8x4" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"} Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220090 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq8x4" event={"ID":"f01a059b-5337-4eba-bc02-106bb2e15da8","Type":"ContainerDied","Data":"716bfe1daea03209a9d4d6f8afa485fed91c4531dcf18c6a919290635d33e7c7"} Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.220110 4766 scope.go:117] "RemoveContainer" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.237707 4766 scope.go:117] "RemoveContainer" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.256596 4766 scope.go:117] "RemoveContainer" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.278854 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.278918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.279101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") pod \"f01a059b-5337-4eba-bc02-106bb2e15da8\" (UID: \"f01a059b-5337-4eba-bc02-106bb2e15da8\") " Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280019 4766 scope.go:117] "RemoveContainer" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280134 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities" (OuterVolumeSpecName: "utilities") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.280865 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": container with ID starting with 995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a not found: ID does not exist" containerID="995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280928 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a"} err="failed to get container status \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": rpc error: code = NotFound desc = could not find container \"995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a\": container with ID starting with 995a48d874b66023868a3a83fd5c367bd546ef22d90f80f8f2b3eae25ae67e0a not found: ID does not exist" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.280954 4766 scope.go:117] "RemoveContainer" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d" Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.281553 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": container with ID starting with 0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d not found: ID does not exist" containerID="0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281600 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d"} err="failed to get container status \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": rpc error: code = NotFound desc = could not find container \"0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d\": container with ID starting with 0acecfe5a42117f15e32e19da99617c81a1d07eda945ccdd12d0c58c6617c02d not found: ID does not exist" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281632 4766 scope.go:117] "RemoveContainer" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6" Jan 30 17:46:23 crc kubenswrapper[4766]: E0130 17:46:23.281945 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": container with ID starting with 465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6 not found: ID does not exist" containerID="465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.281976 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6"} err="failed to get container status \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": rpc error: code = NotFound desc = could not find container \"465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6\": container with ID starting with 
465c1cb5c8cf7246fde5bcedb2da56cb0f589b3aa6e4017c15ad93dca8961ec6 not found: ID does not exist" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.285522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h" (OuterVolumeSpecName: "kube-api-access-hdb7h") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "kube-api-access-hdb7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.339766 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f01a059b-5337-4eba-bc02-106bb2e15da8" (UID: "f01a059b-5337-4eba-bc02-106bb2e15da8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.380889 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.381423 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f01a059b-5337-4eba-bc02-106bb2e15da8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.381449 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdb7h\" (UniqueName: \"kubernetes.io/projected/f01a059b-5337-4eba-bc02-106bb2e15da8-kube-api-access-hdb7h\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.550814 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:23 crc kubenswrapper[4766]: I0130 17:46:23.556623 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gq8x4"] Jan 30 17:46:24 crc kubenswrapper[4766]: I0130 17:46:24.066752 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" path="/var/lib/kubelet/pods/f01a059b-5337-4eba-bc02-106bb2e15da8/volumes" Jan 30 17:46:29 crc kubenswrapper[4766]: I0130 17:46:29.039739 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:29 crc kubenswrapper[4766]: E0130 17:46:29.041511 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.615955 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"] Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.616908 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-content" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.616928 4766 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-content" Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.616960 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.616969 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server" Jan 30 17:46:35 crc kubenswrapper[4766]: E0130 17:46:35.617010 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-utilities" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.617020 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="extract-utilities" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.617255 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01a059b-5337-4eba-bc02-106bb2e15da8" containerName="registry-server" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.618498 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.627252 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"] Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.768451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870105 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870193 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn" Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870226 4766 
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870609 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.870840 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.888637 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"redhat-marketplace-vwlnn\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") " pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:35 crc kubenswrapper[4766]: I0130 17:46:35.952919 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:37 crc kubenswrapper[4766]: I0130 17:46:37.563818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323582 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" exitCode=0
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"}
Jan 30 17:46:38 crc kubenswrapper[4766]: I0130 17:46:38.323892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerStarted","Data":"88c9e43d0a3ccbbce3f4bedcd3d0208a41e0cda34902c848ab56a33ffb898e0e"}
Jan 30 17:46:40 crc kubenswrapper[4766]: I0130 17:46:40.340781 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" exitCode=0
Jan 30 17:46:40 crc kubenswrapper[4766]: I0130 17:46:40.340870 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"}
Jan 30 17:46:41 crc kubenswrapper[4766]: I0130 17:46:41.351587 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerStarted","Data":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"}
event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerStarted","Data":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"} Jan 30 17:46:41 crc kubenswrapper[4766]: I0130 17:46:41.372534 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwlnn" podStartSLOduration=3.7562011379999998 podStartE2EDuration="6.372510199s" podCreationTimestamp="2026-01-30 17:46:35 +0000 UTC" firstStartedPulling="2026-01-30 17:46:38.325187917 +0000 UTC m=+5052.963145263" lastFinishedPulling="2026-01-30 17:46:40.941496978 +0000 UTC m=+5055.579454324" observedRunningTime="2026-01-30 17:46:41.367453187 +0000 UTC m=+5056.005410543" watchObservedRunningTime="2026-01-30 17:46:41.372510199 +0000 UTC m=+5056.010467535" Jan 30 17:46:42 crc kubenswrapper[4766]: I0130 17:46:42.039618 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:42 crc kubenswrapper[4766]: E0130 17:46:42.039873 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.180318 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.181848 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.191701 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306086 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.306190 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408008 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " 
pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.408987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.439894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"redhat-operators-fd2sd\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") " pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.501160 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:46:44 crc kubenswrapper[4766]: I0130 17:46:44.985626 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"]
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381223 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554" exitCode=0
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381273 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554"}
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.381297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"b4995231c9fbeb8351b02fbf9273df0d1e9d55dedf024468b645cab4df9fce9a"}
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.953810 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:45 crc kubenswrapper[4766]: I0130 17:46:45.953879 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.001954 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.391127 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c"}
Jan 30 17:46:46 crc kubenswrapper[4766]: I0130 17:46:46.443876 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:47 crc kubenswrapper[4766]: I0130 17:46:47.400538 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c" exitCode=0
Jan 30 17:46:47 crc kubenswrapper[4766]: I0130 17:46:47.400746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c"}
Jan 30 17:46:48 crc kubenswrapper[4766]: I0130 17:46:48.769068 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:48 crc kubenswrapper[4766]: I0130 17:46:48.769557 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwlnn" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" containerID="cri-o://657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc" gracePeriod=2
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.187818 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
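
The probe transitions above show the ordering contract: readiness reports an empty status and startup reports "unhealthy" until the startup probe passes ("started"), and only then is readiness evaluated and allowed to flip to "ready". A hypothetical probe wiring consistent with that sequence, using k8s.io/api/core/v1 types (the command and thresholds are invented, not taken from this log):

    package probes

    import corev1 "k8s.io/api/core/v1"

    // registryServerProbes sketches a startup probe gating a readiness probe;
    // kubelet runs readiness only after the startup probe has succeeded once.
    func registryServerProbes() (startup, readiness *corev1.Probe) {
    	check := corev1.ProbeHandler{
    		Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
    	}
    	startup = &corev1.Probe{ProbeHandler: check, PeriodSeconds: 1, FailureThreshold: 60}
    	readiness = &corev1.Probe{ProbeHandler: check, PeriodSeconds: 10}
    	return startup, readiness
    }
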
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.300553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") pod \"867849bc-5872-4cd8-8fb0-45bea0c35457\" (UID: \"867849bc-5872-4cd8-8fb0-45bea0c35457\") "
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.301470 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities" (OuterVolumeSpecName: "utilities") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.310392 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch" (OuterVolumeSpecName: "kube-api-access-4qjch") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "kube-api-access-4qjch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.326788 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "867849bc-5872-4cd8-8fb0-45bea0c35457" (UID: "867849bc-5872-4cd8-8fb0-45bea0c35457"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402843 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402913 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867849bc-5872-4cd8-8fb0-45bea0c35457-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.402924 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjch\" (UniqueName: \"kubernetes.io/projected/867849bc-5872-4cd8-8fb0-45bea0c35457-kube-api-access-4qjch\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418230 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwlnn"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418249 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418303 4766 scope.go:117] "RemoveContainer" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418368 4766 generic.go:334] "Generic (PLEG): container finished" podID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc" exitCode=0
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.418441 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwlnn" event={"ID":"867849bc-5872-4cd8-8fb0-45bea0c35457","Type":"ContainerDied","Data":"88c9e43d0a3ccbbce3f4bedcd3d0208a41e0cda34902c848ab56a33ffb898e0e"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.420982 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerStarted","Data":"8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f"}
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.458243 4766 scope.go:117] "RemoveContainer" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.466423 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fd2sd" podStartSLOduration=2.470871439 podStartE2EDuration="5.466398983s" podCreationTimestamp="2026-01-30 17:46:44 +0000 UTC" firstStartedPulling="2026-01-30 17:46:45.382591104 +0000 UTC m=+5060.020548460" lastFinishedPulling="2026-01-30 17:46:48.378118668 +0000 UTC m=+5063.016076004" observedRunningTime="2026-01-30 17:46:49.442735882 +0000 UTC m=+5064.080693238" watchObservedRunningTime="2026-01-30 17:46:49.466398983 +0000 UTC m=+5064.104356319"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.490433 4766 scope.go:117] "RemoveContainer" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.491681 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.500851 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwlnn"]
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.515159 4766 scope.go:117] "RemoveContainer" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.516757 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": container with ID starting with 657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc not found: ID does not exist" containerID="657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"
Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.516849 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"} err="failed to get container status \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": rpc error: code = NotFound desc = could not find container \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": container with ID starting with 657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc not found: ID does not exist"
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc"} err="failed to get container status \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": rpc error: code = NotFound desc = could not find container \"657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc\": container with ID starting with 657787fa63953bfadb8e93df0777b82c5d657a1dba7f4eff523a6a3b391864cc not found: ID does not exist" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.516885 4766 scope.go:117] "RemoveContainer" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.517509 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": container with ID starting with e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941 not found: ID does not exist" containerID="e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.517578 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941"} err="failed to get container status \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": rpc error: code = NotFound desc = could not find container \"e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941\": container with ID starting with e4eee46ad6a23d196259f7952049d7c7fd59a3bfb593f8db157da12549b06941 not found: ID does not exist" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.517619 4766 scope.go:117] "RemoveContainer" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" Jan 30 17:46:49 crc kubenswrapper[4766]: E0130 17:46:49.518294 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": container with ID starting with b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7 not found: ID does not exist" containerID="b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7" Jan 30 17:46:49 crc kubenswrapper[4766]: I0130 17:46:49.518336 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7"} err="failed to get container status \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": rpc error: code = NotFound desc = could not find container \"b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7\": container with ID starting with b3718f4ed626374185830ef27f31957e8032c6e26976f17f2b5931b14efd38a7 not found: ID does not exist" Jan 30 17:46:50 crc kubenswrapper[4766]: I0130 17:46:50.048633 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" path="/var/lib/kubelet/pods/867849bc-5872-4cd8-8fb0-45bea0c35457/volumes" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.502348 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.503021 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:54 crc kubenswrapper[4766]: I0130 17:46:54.557773 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:55 crc kubenswrapper[4766]: I0130 17:46:55.525471 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:55 crc kubenswrapper[4766]: I0130 17:46:55.579162 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:57 crc kubenswrapper[4766]: I0130 17:46:57.039832 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:46:57 crc kubenswrapper[4766]: E0130 17:46:57.040156 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:46:57 crc kubenswrapper[4766]: I0130 17:46:57.483358 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fd2sd" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" containerID="cri-o://8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" gracePeriod=2 Jan 30 17:46:58 crc kubenswrapper[4766]: I0130 17:46:58.492764 4766 generic.go:334] "Generic (PLEG): container finished" podID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerID="8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" exitCode=0 Jan 30 17:46:58 crc kubenswrapper[4766]: I0130 17:46:58.492840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f"} Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.173816 4766 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217611 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") "
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217682 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") "
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.217727 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") pod \"05e015a3-c2f7-491b-a864-d6f03a8da284\" (UID: \"05e015a3-c2f7-491b-a864-d6f03a8da284\") "
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.218529 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities" (OuterVolumeSpecName: "utilities") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.222629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f" (OuterVolumeSpecName: "kube-api-access-pc49f") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "kube-api-access-pc49f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.318925 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.318959 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc49f\" (UniqueName: \"kubernetes.io/projected/05e015a3-c2f7-491b-a864-d6f03a8da284-kube-api-access-pc49f\") on node \"crc\" DevicePath \"\""
Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.339055 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05e015a3-c2f7-491b-a864-d6f03a8da284" (UID: "05e015a3-c2f7-491b-a864-d6f03a8da284"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.420613 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e015a3-c2f7-491b-a864-d6f03a8da284-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502137 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fd2sd" event={"ID":"05e015a3-c2f7-491b-a864-d6f03a8da284","Type":"ContainerDied","Data":"b4995231c9fbeb8351b02fbf9273df0d1e9d55dedf024468b645cab4df9fce9a"} Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502259 4766 scope.go:117] "RemoveContainer" containerID="8b0a3950eeb2f65987ccc1596f817bf2057bd60ced5d79b907e67333850dbc9f" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.502208 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fd2sd" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.529931 4766 scope.go:117] "RemoveContainer" containerID="ac1109ad27e435c6eeb4e27344b7c151cdc5174829db6cab7bdfb5dbaacbc67c" Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.535167 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.547020 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fd2sd"] Jan 30 17:46:59 crc kubenswrapper[4766]: I0130 17:46:59.559045 4766 scope.go:117] "RemoveContainer" containerID="ab485356a29ab995ab7860ac3a3f1cc72df688f8bebad45fdd8e16c5b2e5a554" Jan 30 17:47:00 crc kubenswrapper[4766]: I0130 17:47:00.048070 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" path="/var/lib/kubelet/pods/05e015a3-c2f7-491b-a864-d6f03a8da284/volumes" Jan 30 17:47:09 crc kubenswrapper[4766]: I0130 17:47:09.039856 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:47:09 crc kubenswrapper[4766]: E0130 17:47:09.040698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.780859 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782001 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782027 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782059 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782071 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" 
containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782093 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782104 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782123 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782134 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-content" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782165 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782202 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="extract-utilities" Jan 30 17:47:13 crc kubenswrapper[4766]: E0130 17:47:13.782228 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782239 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782477 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e015a3-c2f7-491b-a864-d6f03a8da284" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.782503 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="867849bc-5872-4cd8-8fb0-45bea0c35457" containerName="registry-server" Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.783406 4766 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.788761 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-slmpt"
Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.792217 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.946224 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:13 crc kubenswrapper[4766]: I0130 17:47:13.946793 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.048894 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.049066 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.052821 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
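
"STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice" means the hostpath CSI driver does not advertise the node staging capability, so the attacher skips NodeStageVolume and publishes the volume directly (the "MountDevice succeeded" line that follows is therefore a no-op success). A sketch of the capability probe using the CSI spec's Go bindings (the client wiring is hypothetical):

    package csicheck

    import (
    	"context"

    	"github.com/container-storage-interface/spec/lib/go/csi"
    )

    // supportsStageUnstage asks the node plugin whether NodeStage/NodeUnstage
    // are implemented; when false, kubelet skips MountDevice as logged above.
    func supportsStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
    	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range resp.GetCapabilities() {
    		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
    			return true, nil
    		}
    	}
    	return false, nil
    }
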
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.053031 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b08e05cbea0c7e1d9c8983a7b751e75758c52c7cc2564acebca783f41c2e762a/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.071492 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw29n\" (UniqueName: \"kubernetes.io/projected/d76c2935-d3e2-401f-bdd0-878e885a5add-kube-api-access-xw29n\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.087991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7deb0965-ceb5-43dc-a5cd-42e162b9ce9a\") pod \"mariadb-copy-data\" (UID: \"d76c2935-d3e2-401f-bdd0-878e885a5add\") " pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.110456 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.422770 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.614314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d76c2935-d3e2-401f-bdd0-878e885a5add","Type":"ContainerStarted","Data":"ac48a323f2edf7b25ffbd740e69d78e74fcc2f09968e3795ce9aeef43039cfbb"}
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.614738 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d76c2935-d3e2-401f-bdd0-878e885a5add","Type":"ContainerStarted","Data":"7ba7e39489ade85a844a534bd0d37887ae31b1c23e85bd5fa5f8f4795e986a2e"}
Jan 30 17:47:14 crc kubenswrapper[4766]: I0130 17:47:14.651943 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.651922004 podStartE2EDuration="2.651922004s" podCreationTimestamp="2026-01-30 17:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:47:14.631444757 +0000 UTC m=+5089.269402113" watchObservedRunningTime="2026-01-30 17:47:14.651922004 +0000 UTC m=+5089.289879350"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.383819 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.385265 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
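
In the latency entry above, firstStartedPulling and lastFinishedPulling are the zero time (0001-01-01), i.e. no image pull happened because the image was already on the node, so the SLO and E2E durations coincide:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 17:47:14.651922004 - 17:47:12 = 2.651922004s = podStartSLOduration
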
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.394393 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.499229 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.600741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.622633 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"mariadb-client\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") " pod="openstack/mariadb-client"
Jan 30 17:47:17 crc kubenswrapper[4766]: I0130 17:47:17.704555 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.116840 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:18 crc kubenswrapper[4766]: W0130 17:47:18.119136 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dea217a_9314_4a8d_8607_a007c861127a.slice/crio-bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1 WatchSource:0}: Error finding container bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1: Status 404 returned error can't find the container with id bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.638510 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerStarted","Data":"b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39"}
Jan 30 17:47:18 crc kubenswrapper[4766]: I0130 17:47:18.638835 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerStarted","Data":"bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1"}
Jan 30 17:47:19 crc kubenswrapper[4766]: I0130 17:47:19.650863 4766 generic.go:334] "Generic (PLEG): container finished" podID="8dea217a-9314-4a8d-8607-a007c861127a" containerID="b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39" exitCode=0
Jan 30 17:47:19 crc kubenswrapper[4766]: I0130 17:47:19.650946 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"8dea217a-9314-4a8d-8607-a007c861127a","Type":"ContainerDied","Data":"b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39"}
Jan 30 17:47:20 crc kubenswrapper[4766]: I0130 17:47:20.969980 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
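
The watch-event warning above is a startup race, not a failure: cAdvisor sees the new crio-<id> cgroup before the container is queryable, gets a 404, and catches up on a later event. The systemd slice name in that path also encodes the pod UID with underscores; a small sketch of recovering it (helper is illustrative):

    package cgroups

    import "strings"

    // podUIDFromSlice turns "kubepods-besteffort-pod8dea217a_9314_..." into
    // the dashed pod UID seen elsewhere in this log.
    func podUIDFromSlice(slice string) (string, bool) {
    	const marker = "pod"
    	i := strings.LastIndex(slice, marker)
    	if i < 0 {
    		return "", false
    	}
    	uid := strings.TrimSuffix(slice[i+len(marker):], ".slice")
    	return strings.ReplaceAll(uid, "_", "-"), true
    }
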
Jan 30 17:47:20 crc kubenswrapper[4766]: I0130 17:47:20.993489 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_8dea217a-9314-4a8d-8607-a007c861127a/mariadb-client/0.log"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.020250 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.027136 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.049715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") pod \"8dea217a-9314-4a8d-8607-a007c861127a\" (UID: \"8dea217a-9314-4a8d-8607-a007c861127a\") "
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.063033 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb" (OuterVolumeSpecName: "kube-api-access-ffzjb") pod "8dea217a-9314-4a8d-8607-a007c861127a" (UID: "8dea217a-9314-4a8d-8607-a007c861127a"). InnerVolumeSpecName "kube-api-access-ffzjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: E0130 17:47:21.130460 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130478 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.130663 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dea217a-9314-4a8d-8607-a007c861127a" containerName="mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.131119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.148065 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.155124 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffzjb\" (UniqueName: \"kubernetes.io/projected/8dea217a-9314-4a8d-8607-a007c861127a-kube-api-access-ffzjb\") on node \"crc\" DevicePath \"\""
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.256261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.358160 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.375917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"mariadb-client\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") " pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.452662 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.669307 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb0f391a7dc07c70b0a13527b5f305f68a28f1bfce23078772d33ec4bf718f1"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.669451 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.697673 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="8dea217a-9314-4a8d-8607-a007c861127a" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4"
Jan 30 17:47:21 crc kubenswrapper[4766]: W0130 17:47:21.957110 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68174148_c4d4_4f1d_ab10_8372f6dcaeb4.slice/crio-92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10 WatchSource:0}: Error finding container 92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10: Status 404 returned error can't find the container with id 92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10
Jan 30 17:47:21 crc kubenswrapper[4766]: I0130 17:47:21.957623 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.049149 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dea217a-9314-4a8d-8607-a007c861127a" path="/var/lib/kubelet/pods/8dea217a-9314-4a8d-8607-a007c861127a/volumes"
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677862 4766 generic.go:334] "Generic (PLEG): container finished" podID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerID="d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5" exitCode=0
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677910 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"68174148-c4d4-4f1d-ab10-8372f6dcaeb4","Type":"ContainerDied","Data":"d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5"}
Jan 30 17:47:22 crc kubenswrapper[4766]: I0130 17:47:22.677938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"68174148-c4d4-4f1d-ab10-8372f6dcaeb4","Type":"ContainerStarted","Data":"92bf3ceaae7cc23e9b0e853547c441ad451d50b28eb38c70a3a2726a06730d10"}
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.040952 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:23 crc kubenswrapper[4766]: E0130 17:47:23.041155 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.944614 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
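
"Pod was deleted and then recreated, skipping status update" is the UID rule in action: the name openstack/mariadb-client is reused, but oldPodUID and podUID differ, so a status write keyed to the old incarnation must be dropped rather than applied to the new pod. A sketch of the corresponding guard with client-go (names are taken from the log; the guard function itself is illustrative):

    package identity

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // sameIncarnation reports whether the live pod is still the incarnation
    // we cached; after delete+recreate the UID differs and updates must stop.
    func sameIncarnation(ctx context.Context, cs kubernetes.Interface, cachedUID types.UID) (bool, error) {
    	pod, err := cs.CoreV1().Pods("openstack").Get(ctx, "mariadb-client", metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	return pod.UID == cachedUID, nil
    }
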
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.963932 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_68174148-c4d4-4f1d-ab10-8372f6dcaeb4/mariadb-client/0.log"
Jan 30 17:47:23 crc kubenswrapper[4766]: I0130 17:47:23.993070 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.002311 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.108031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") pod \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\" (UID: \"68174148-c4d4-4f1d-ab10-8372f6dcaeb4\") "
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.114451 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh" (OuterVolumeSpecName: "kube-api-access-b88rh") pod "68174148-c4d4-4f1d-ab10-8372f6dcaeb4" (UID: "68174148-c4d4-4f1d-ab10-8372f6dcaeb4"). InnerVolumeSpecName "kube-api-access-b88rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.210475 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b88rh\" (UniqueName: \"kubernetes.io/projected/68174148-c4d4-4f1d-ab10-8372f6dcaeb4-kube-api-access-b88rh\") on node \"crc\" DevicePath \"\""
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.691846 4766 scope.go:117] "RemoveContainer" containerID="d8504184abdc59d46439aff32e612a0f7f012cb9b67d257b000d3ef0913598c5"
Jan 30 17:47:24 crc kubenswrapper[4766]: I0130 17:47:24.692007 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 30 17:47:26 crc kubenswrapper[4766]: I0130 17:47:26.049633 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" path="/var/lib/kubelet/pods/68174148-c4d4-4f1d-ab10-8372f6dcaeb4/volumes"
Jan 30 17:47:31 crc kubenswrapper[4766]: I0130 17:47:31.564975 4766 scope.go:117] "RemoveContainer" containerID="0078600a657ee1591d8d9983657bcc34b477649798d6ae05ffcf66ebeaeaa4a4"
Jan 30 17:47:36 crc kubenswrapper[4766]: I0130 17:47:36.043605 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:36 crc kubenswrapper[4766]: E0130 17:47:36.044198 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:47:49 crc kubenswrapper[4766]: I0130 17:47:49.038879 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:47:49 crc kubenswrapper[4766]: E0130 17:47:49.039549 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:48:01 crc kubenswrapper[4766]: I0130 17:48:01.039797 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c"
Jan 30 17:48:01 crc kubenswrapper[4766]: E0130 17:48:01.040817 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.984662 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 17:48:03 crc kubenswrapper[4766]: E0130 17:48:03.985026 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.985045 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.985362 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="68174148-c4d4-4f1d-ab10-8372f6dcaeb4" containerName="mariadb-client"
Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.986380 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.989718 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h4smv" Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.989914 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.994739 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:48:03 crc kubenswrapper[4766]: I0130 17:48:03.996416 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.008259 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.009519 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.021139 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.022487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026750 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026839 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026908 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.026996 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghqb\" (UniqueName: 
\"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.031483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.063635 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.128999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129258 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129683 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghqb\" (UniqueName: \"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.129849 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.131036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.131139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc 
kubenswrapper[4766]: I0130 17:48:04.131170 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b16a682c-8a11-4113-82e8-b361a1d8881e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132802 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132950 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.132977 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133011 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133033 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133128 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.133630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.134072 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.134683 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b16a682c-8a11-4113-82e8-b361a1d8881e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.136735 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.136763 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/57bd0c9220c7b3cf0c3fac8a83ec31e9cd3ecf2a08f7ee09f213bf587e64c805/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.139200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16a682c-8a11-4113-82e8-b361a1d8881e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.146908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghqb\" (UniqueName: \"kubernetes.io/projected/b16a682c-8a11-4113-82e8-b361a1d8881e-kube-api-access-hghqb\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.166663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ee35ca4-b7bd-4930-8399-0580ed877e5d\") pod \"ovsdbserver-nb-0\" (UID: \"b16a682c-8a11-4113-82e8-b361a1d8881e\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.176395 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.178117 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186091 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186265 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.186674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fvvb2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.192373 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.221594 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.223168 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.232553 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.234113 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236361 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236412 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236472 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236517 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236544 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236569 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.236631 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240107 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240151 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240232 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240257 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240290 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: 
I0130 17:48:04.240328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240341 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240373 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240401 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.240693 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-config\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.242131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-config\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.242765 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b29c551b-31dd-4264-b3f0-04fde1a2529f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.243163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b29c551b-31dd-4264-b3f0-04fde1a2529f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247018 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591e329-01bd-4573-8590-6e3f62bfb187-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247152 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.247257 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27ea08c57b06496e2d93d97b9248d1c8155fdae78f0593fca82f73e37336042a/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.248093 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591e329-01bd-4573-8590-6e3f62bfb187-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.250926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29c551b-31dd-4264-b3f0-04fde1a2529f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.252540 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.256108 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.256149 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8d66a76004ccd7102542b83fa60b6d7731a2eea77eb91c16605bd100f23334a/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.259872 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591e329-01bd-4573-8590-6e3f62bfb187-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.264990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsvp\" (UniqueName: \"kubernetes.io/projected/2591e329-01bd-4573-8590-6e3f62bfb187-kube-api-access-7xsvp\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.269832 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4b49\" (UniqueName: \"kubernetes.io/projected/b29c551b-31dd-4264-b3f0-04fde1a2529f-kube-api-access-d4b49\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.304509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51b64680-3f7a-4288-943b-d0019aa91b8e\") pod \"ovsdbserver-nb-1\" (UID: \"b29c551b-31dd-4264-b3f0-04fde1a2529f\") " pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.310898 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f0cf9e3-1887-4ebc-84a7-4ca1bfdbe2ae\") pod \"ovsdbserver-nb-2\" (UID: \"2591e329-01bd-4573-8590-6e3f62bfb187\") " pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341394 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341463 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341569 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341600 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341634 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341662 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod 
\"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341695 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341734 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341780 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.341842 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.342213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1053f18b-60a9-44c8-84f5-77bc506a83c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.343860 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.344133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1053f18b-60a9-44c8-84f5-77bc506a83c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1053f18b-60a9-44c8-84f5-77bc506a83c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345965 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.345993 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b79da9dd838b1d23c15710aab6ce2b6fb8c619bcc90851891501a8917c282052/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.355367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.359167 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlt7\" (UniqueName: \"kubernetes.io/projected/1053f18b-60a9-44c8-84f5-77bc506a83c1-kube-api-access-xwlt7\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.368572 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.374660 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.383459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a4e1574-cdca-439a-950d-d70cbd1603ae\") pod \"ovsdbserver-sb-0\" (UID: \"1053f18b-60a9-44c8-84f5-77bc506a83c1\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443137 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443226 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443253 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443282 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443403 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443446 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443480 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443702 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.443790 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.444553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76df5ae8-0eeb-4bb5-86ee-1c416397a186-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.447263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-config\") pod \"ovsdbserver-sb-2\" (UID: 
\"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.450731 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76df5ae8-0eeb-4bb5-86ee-1c416397a186-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.455420 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.468022 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fad90c2c72d134ef9ba9a53a0c0b32c3c7c172b59b324139234f8cbee12231bd/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.464215 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twm7h\" (UniqueName: \"kubernetes.io/projected/76df5ae8-0eeb-4bb5-86ee-1c416397a186-kube-api-access-twm7h\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.455657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df5ae8-0eeb-4bb5-86ee-1c416397a186-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.497127 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e2c0f44-0757-48d2-8df4-07b76f444461\") pod \"ovsdbserver-sb-2\" (UID: \"76df5ae8-0eeb-4bb5-86ee-1c416397a186\") " pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.536882 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545204 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545295 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545320 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.545393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-config\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546476 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95b4e121-951b-4c45-a227-1ec8638a2320-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.546558 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95b4e121-951b-4c45-a227-1ec8638a2320-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.553040 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95b4e121-951b-4c45-a227-1ec8638a2320-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.549431 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.556323 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/883e66b7a9a89d385eec218336add04608336322f761d687a93ed65b04608b84/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.564722 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7625x\" (UniqueName: \"kubernetes.io/projected/95b4e121-951b-4c45-a227-1ec8638a2320-kube-api-access-7625x\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.582444 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3533d9c0-0b55-4512-a747-7107c7faaaf0\") pod \"ovsdbserver-sb-1\" (UID: \"95b4e121-951b-4c45-a227-1ec8638a2320\") " pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.614308 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.622616 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.883017 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.971716 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 17:48:04 crc kubenswrapper[4766]: I0130 17:48:04.984143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"27286e812b18d1e43a8bae8a21c3ece2f203d193ecd85a3a5af8469a9941ce67"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.109804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.114268 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1053f18b_60a9_44c8_84f5_77bc506a83c1.slice/crio-8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867 WatchSource:0}: Error finding container 8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867: Status 404 returned error can't find the container with id 8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.203828 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.210622 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76df5ae8_0eeb_4bb5_86ee_1c416397a186.slice/crio-ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5 WatchSource:0}: Error finding container ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5: Status 404 returned error can't find the container with id ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.292953 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.316724 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b4e121_951b_4c45_a227_1ec8638a2320.slice/crio-46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05 WatchSource:0}: Error finding container 46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05: Status 404 returned error can't find the container with id 46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05 Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.932414 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 17:48:05 crc kubenswrapper[4766]: W0130 17:48:05.932415 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2591e329_01bd_4573_8590_6e3f62bfb187.slice/crio-81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba WatchSource:0}: Error finding container 81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba: Status 404 returned error can't find the container with id 81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994139 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"896dc9b2140cdbb9feca3570d7b30f7f18296bf3abaa007934e72bb64c6f8b1a"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994191 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"6daf5437d8aeddaf3b430297928833b40f729f84da4f3e95f92d0aab3b16b563"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.994203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"95b4e121-951b-4c45-a227-1ec8638a2320","Type":"ContainerStarted","Data":"46caa19fbbdec32d614fe19c7317f54b50fc6460b1a5ac2156b8a52b0da0ff05"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.995632 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"201fe3bb1762cbcd5153a87229856504d7b798d0dd8ff55c10e85ad0f6c744d0"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.995663 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b16a682c-8a11-4113-82e8-b361a1d8881e","Type":"ContainerStarted","Data":"a739b2823646428a167772010d67fbc65e78b9a529222cdff4121f7b89dedda7"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"3835bd209ba13ed52993559938cc3f790f1653c2220bef4c91917bfec829fa7b"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"92427a4b4d027e89307e4fea29d64af553b582cf6f71a3fb1eec67d57f975d98"} Jan 30 17:48:05 crc kubenswrapper[4766]: I0130 17:48:05.999672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1053f18b-60a9-44c8-84f5-77bc506a83c1","Type":"ContainerStarted","Data":"8772dea3993e3d949db36eb27ca1b0725c47feb93d723b13e9964ed56e32d867"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.000636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"81f52adcf79e450f66f880e45e97417d058e4c62aeecc065b46f1698cf28a0ba"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002590 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"40ac9d1bc9dd75d4d0d07b3439b9abd3570631d5f84ca0e2439e890e6564322b"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002643 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"a57bd5da8711287631b67c7bf6f938a00df20d2c92a407cef4cd93aa386b134a"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.002654 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"76df5ae8-0eeb-4bb5-86ee-1c416397a186","Type":"ContainerStarted","Data":"ecae13dba215bf678be8ce6ab83bc96e60d35a77b6550399dbd634cf31a926f5"} Jan 30 17:48:06 crc 
kubenswrapper[4766]: I0130 17:48:06.004635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"bb1c7bb1537090c6ec37d76442eee16cf11ba09db0137d4250ed20b8aac54faa"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.004668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"25d6d8421b6706e8bd39234f0da9fcfe8c25d9a4cd62bd9fe06eea336925eb44"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.004680 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"b29c551b-31dd-4264-b3f0-04fde1a2529f","Type":"ContainerStarted","Data":"937db78a490aacb898bb62aad7d1a63bd31912bdec468f3c8ceb94d72a3e1f56"} Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.043060 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.043037762 podStartE2EDuration="3.043037762s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.01550602 +0000 UTC m=+5140.653463366" watchObservedRunningTime="2026-01-30 17:48:06.043037762 +0000 UTC m=+5140.680995118" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.046341 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.046328288 podStartE2EDuration="4.046328288s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.038387851 +0000 UTC m=+5140.676345217" watchObservedRunningTime="2026-01-30 17:48:06.046328288 +0000 UTC m=+5140.684285644" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.088388 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.088368461 podStartE2EDuration="3.088368461s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.082642761 +0000 UTC m=+5140.720600107" watchObservedRunningTime="2026-01-30 17:48:06.088368461 +0000 UTC m=+5140.726325807" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.094031 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.093986228 podStartE2EDuration="4.093986228s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:06.06351821 +0000 UTC m=+5140.701475566" watchObservedRunningTime="2026-01-30 17:48:06.093986228 +0000 UTC m=+5140.731943574" Jan 30 17:48:06 crc kubenswrapper[4766]: I0130 17:48:06.191574 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.191551446 podStartE2EDuration="3.191551446s" podCreationTimestamp="2026-01-30 17:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
17:48:06.161460947 +0000 UTC m=+5140.799418303" watchObservedRunningTime="2026-01-30 17:48:06.191551446 +0000 UTC m=+5140.829508792" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.014562 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"5c44f5d4d347a50bc0817a695e6ab2e88b01d8e4aa0980d011edffcea3a9eb80"} Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.014630 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"2591e329-01bd-4573-8590-6e3f62bfb187","Type":"ContainerStarted","Data":"1dc982ffcbb9c41c87e94e8a298fae0ee3744121bd0a8d8542f5dc4cc4ba397c"} Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.040838 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.040817854 podStartE2EDuration="5.040817854s" podCreationTimestamp="2026-01-30 17:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:07.034718365 +0000 UTC m=+5141.672675741" watchObservedRunningTime="2026-01-30 17:48:07.040817854 +0000 UTC m=+5141.678775200" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.356080 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.369352 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.375155 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.538274 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.615129 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:07 crc kubenswrapper[4766]: I0130 17:48:07.624031 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.356417 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.369013 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.375708 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.538032 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.614981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:09 crc kubenswrapper[4766]: I0130 17:48:09.623214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.391491 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.404772 
4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.415439 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.436957 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.454084 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.572532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.626162 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.683618 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.686450 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.742820 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.744802 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.747671 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.751273 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.761680 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.761745 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872206 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.872257 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc 
kubenswrapper[4766]: I0130 17:48:10.872298 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.927698 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:10 crc kubenswrapper[4766]: E0130 17:48:10.928315 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-kxwj9 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" podUID="06b62d4e-8988-4983-a956-a96e3c5b055d" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.959082 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.960902 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.963485 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.973888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.973996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975382 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.975643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.976309 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.976493 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:10 crc kubenswrapper[4766]: I0130 17:48:10.994308 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"dnsmasq-dns-64848558ff-5rxbn\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.050404 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.063946 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076854 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.076952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.077024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.077066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.088986 4766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177713 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.177935 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") pod \"06b62d4e-8988-4983-a956-a96e3c5b055d\" (UID: \"06b62d4e-8988-4983-a956-a96e3c5b055d\") " Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178140 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config" (OuterVolumeSpecName: "config") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178648 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.178994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.180362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.180897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181092 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181200 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.181824 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9" (OuterVolumeSpecName: "kube-api-access-kxwj9") pod "06b62d4e-8988-4983-a956-a96e3c5b055d" (UID: "06b62d4e-8988-4983-a956-a96e3c5b055d"). InnerVolumeSpecName "kube-api-access-kxwj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.182272 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.182462 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183123 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183138 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.183148 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06b62d4e-8988-4983-a956-a96e3c5b055d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.201029 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"dnsmasq-dns-8fdcd7795-tjgm8\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.284086 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.284994 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwj9\" (UniqueName: \"kubernetes.io/projected/06b62d4e-8988-4983-a956-a96e3c5b055d-kube-api-access-kxwj9\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:11 crc kubenswrapper[4766]: I0130 17:48:11.814363 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.039615 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:12 crc kubenswrapper[4766]: E0130 17:48:12.040164 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.057939 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerStarted","Data":"a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9"} Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.058023 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64848558ff-5rxbn" Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.145611 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:12 crc kubenswrapper[4766]: I0130 17:48:12.154248 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64848558ff-5rxbn"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.069535 4766 generic.go:334] "Generic (PLEG): container finished" podID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerID="1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15" exitCode=0 Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.069636 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15"} Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.751763 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.752824 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.757977 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.759782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879476 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.879495 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980760 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980815 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.980845 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.986099 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.986142 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34fd43cf0e80788d7c16160b1b222ac3f3ff804c8ca8200947eb730686989322/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 17:48:13 crc kubenswrapper[4766]: I0130 17:48:13.990981 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7fb6354d-977f-494f-9a51-0a1b8f48c686-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.003298 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgm66\" (UniqueName: \"kubernetes.io/projected/7fb6354d-977f-494f-9a51-0a1b8f48c686-kube-api-access-hgm66\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.021583 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45de3e47-39b0-4107-8386-9d3706ed6887\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45de3e47-39b0-4107-8386-9d3706ed6887\") pod \"ovn-copy-data\" (UID: \"7fb6354d-977f-494f-9a51-0a1b8f48c686\") " pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.050470 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b62d4e-8988-4983-a956-a96e3c5b055d" path="/var/lib/kubelet/pods/06b62d4e-8988-4983-a956-a96e3c5b055d/volumes" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.073756 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.080119 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerStarted","Data":"d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08"} Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.080391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.101625 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podStartSLOduration=4.101605211 podStartE2EDuration="4.101605211s" podCreationTimestamp="2026-01-30 17:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:14.097584575 +0000 UTC m=+5148.735541931" watchObservedRunningTime="2026-01-30 17:48:14.101605211 +0000 UTC m=+5148.739562547" Jan 30 17:48:14 crc kubenswrapper[4766]: I0130 17:48:14.566571 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 17:48:14 crc kubenswrapper[4766]: W0130 17:48:14.567776 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fb6354d_977f_494f_9a51_0a1b8f48c686.slice/crio-1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591 WatchSource:0}: Error finding container 1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591: Status 404 returned error can't find the container with id 1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591 Jan 30 17:48:15 crc kubenswrapper[4766]: I0130 17:48:15.091652 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7fb6354d-977f-494f-9a51-0a1b8f48c686","Type":"ContainerStarted","Data":"83a1f53cf7d0c4406d5a72249ccb3ade022d371c6d16fde6067b73d61e92f77b"} Jan 30 17:48:15 crc kubenswrapper[4766]: I0130 17:48:15.092029 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7fb6354d-977f-494f-9a51-0a1b8f48c686","Type":"ContainerStarted","Data":"1fbd1f623b6e2fbfe2eb654bdf0ff41b986648ec654bf46f0c5fbbea88637591"} Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.568847 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=7.568818992 podStartE2EDuration="7.568818992s" podCreationTimestamp="2026-01-30 17:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:15.11033535 +0000 UTC m=+5149.748292696" watchObservedRunningTime="2026-01-30 17:48:19.568818992 +0000 UTC m=+5154.206776338" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.574796 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.576440 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wxzgn" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581712 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.581931 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.589257 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675760 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.675999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777470 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.777521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778563 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-config\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778754 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9743ed16-7558-435e-9f72-3688bd1102d7-scripts\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.778990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9743ed16-7558-435e-9f72-3688bd1102d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.790909 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9743ed16-7558-435e-9f72-3688bd1102d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.795437 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9h7\" (UniqueName: \"kubernetes.io/projected/9743ed16-7558-435e-9f72-3688bd1102d7-kube-api-access-9n9h7\") pod \"ovn-northd-0\" (UID: \"9743ed16-7558-435e-9f72-3688bd1102d7\") " pod="openstack/ovn-northd-0" Jan 30 17:48:19 crc kubenswrapper[4766]: I0130 17:48:19.905657 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:48:20 crc kubenswrapper[4766]: I0130 17:48:20.353967 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:48:20 crc kubenswrapper[4766]: W0130 17:48:20.357281 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9743ed16_7558_435e_9f72_3688bd1102d7.slice/crio-04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc WatchSource:0}: Error finding container 04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc: Status 404 returned error can't find the container with id 04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.142679 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"f313b7a60eb3c33f8accb6f37a6bc487347211382a4d46df5c79886df8cdf21a"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.142989 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"68560760de42cca4a7da438368fba192e806f9f333ff93aa469770b214518a36"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.143003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9743ed16-7558-435e-9f72-3688bd1102d7","Type":"ContainerStarted","Data":"04eb7d290fcab78c4050b193513ebd5dba4255a5c372aa0799e7aea4dd7b98cc"} Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.143040 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.166359 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.16633491 podStartE2EDuration="2.16633491s" podCreationTimestamp="2026-01-30 17:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:21.157877508 +0000 UTC m=+5155.795834854" watchObservedRunningTime="2026-01-30 17:48:21.16633491 +0000 UTC m=+5155.804292256" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.286532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.339469 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.339722 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" containerID="cri-o://64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" gracePeriod=10 Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.819367 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915365 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.915426 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") pod \"83b52c39-5b23-4e74-abf9-0018a54b215e\" (UID: \"83b52c39-5b23-4e74-abf9-0018a54b215e\") " Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.920490 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6" (OuterVolumeSpecName: "kube-api-access-dz2t6") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "kube-api-access-dz2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.956417 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:21 crc kubenswrapper[4766]: I0130 17:48:21.957214 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config" (OuterVolumeSpecName: "config") pod "83b52c39-5b23-4e74-abf9-0018a54b215e" (UID: "83b52c39-5b23-4e74-abf9-0018a54b215e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017446 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017484 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz2t6\" (UniqueName: \"kubernetes.io/projected/83b52c39-5b23-4e74-abf9-0018a54b215e-kube-api-access-dz2t6\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.017494 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b52c39-5b23-4e74-abf9-0018a54b215e-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.154944 4766 generic.go:334] "Generic (PLEG): container finished" podID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" exitCode=0 Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155072 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155087 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"} Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-pmwzk" event={"ID":"83b52c39-5b23-4e74-abf9-0018a54b215e","Type":"ContainerDied","Data":"413a11896bba6c856744f800c01e207dabe5ad018e6db2441e865aa1619f4199"} Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.155236 4766 scope.go:117] "RemoveContainer" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.178677 4766 scope.go:117] "RemoveContainer" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.180028 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.186796 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-pmwzk"] Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.198631 4766 scope.go:117] "RemoveContainer" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: E0130 17:48:22.199534 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": container with ID starting with 64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee not found: ID does not exist" containerID="64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.199589 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee"} err="failed to get container status 
\"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": rpc error: code = NotFound desc = could not find container \"64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee\": container with ID starting with 64dac631aecc3debe44408ebe734ca2389ee4d9a76ae4ea3913d0c3e70670dee not found: ID does not exist" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.199625 4766 scope.go:117] "RemoveContainer" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: E0130 17:48:22.200036 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": container with ID starting with 71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89 not found: ID does not exist" containerID="71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89" Jan 30 17:48:22 crc kubenswrapper[4766]: I0130 17:48:22.200094 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89"} err="failed to get container status \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": rpc error: code = NotFound desc = could not find container \"71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89\": container with ID starting with 71ad7de0b7e47b4f7b8f41c3804d227e46571979ff8bb171580d7c01a4182d89 not found: ID does not exist" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.048890 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" path="/var/lib/kubelet/pods/83b52c39-5b23-4e74-abf9-0018a54b215e/volumes" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.201941 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:24 crc kubenswrapper[4766]: E0130 17:48:24.202299 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202315 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: E0130 17:48:24.202330 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="init" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202336 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="init" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202480 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b52c39-5b23-4e74-abf9-0018a54b215e" containerName="dnsmasq-dns" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.202995 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.213949 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.298672 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.299717 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.306568 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.311171 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.357431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.357488 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458637 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458697 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458768 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.458802 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.459940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.474774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"keystone-db-create-q5td7\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.519104 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.560650 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.560729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.561708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.579783 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"keystone-7780-account-create-update-96kcq\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:24 crc kubenswrapper[4766]: I0130 17:48:24.615448 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.016744 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q5td7"] Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.091039 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"] Jan 30 17:48:25 crc kubenswrapper[4766]: W0130 17:48:25.092725 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d0580a7_5f19_4aa4_893f_106812b15326.slice/crio-5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528 WatchSource:0}: Error finding container 5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528: Status 404 returned error can't find the container with id 5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528 Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.178505 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerStarted","Data":"5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.180024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerStarted","Data":"3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.180051 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerStarted","Data":"92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e"} Jan 30 17:48:25 crc kubenswrapper[4766]: I0130 17:48:25.202734 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-q5td7" podStartSLOduration=1.202711185 podStartE2EDuration="1.202711185s" podCreationTimestamp="2026-01-30 17:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:25.195081235 +0000 UTC m=+5159.833038611" watchObservedRunningTime="2026-01-30 17:48:25.202711185 +0000 UTC m=+5159.840668531" Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.190044 4766 generic.go:334] "Generic (PLEG): container finished" podID="9d0580a7-5f19-4aa4-893f-106812b15326" containerID="869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798" exitCode=0 Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.190152 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerDied","Data":"869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798"} Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.194002 4766 generic.go:334] "Generic (PLEG): container finished" podID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerID="3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c" exitCode=0 Jan 30 17:48:26 crc kubenswrapper[4766]: I0130 17:48:26.194049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" 
event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerDied","Data":"3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c"} Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.039894 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:27 crc kubenswrapper[4766]: E0130 17:48:27.040228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.597560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.610444 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.713800 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") pod \"e09e2e76-7c0b-4efa-b226-18df0a512567\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.713889 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") pod \"9d0580a7-5f19-4aa4-893f-106812b15326\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.714004 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") pod \"e09e2e76-7c0b-4efa-b226-18df0a512567\" (UID: \"e09e2e76-7c0b-4efa-b226-18df0a512567\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.714037 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") pod \"9d0580a7-5f19-4aa4-893f-106812b15326\" (UID: \"9d0580a7-5f19-4aa4-893f-106812b15326\") " Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.715081 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d0580a7-5f19-4aa4-893f-106812b15326" (UID: "9d0580a7-5f19-4aa4-893f-106812b15326"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.715108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e09e2e76-7c0b-4efa-b226-18df0a512567" (UID: "e09e2e76-7c0b-4efa-b226-18df0a512567"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.721097 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr" (OuterVolumeSpecName: "kube-api-access-jwzfr") pod "e09e2e76-7c0b-4efa-b226-18df0a512567" (UID: "e09e2e76-7c0b-4efa-b226-18df0a512567"). InnerVolumeSpecName "kube-api-access-jwzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.723296 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6" (OuterVolumeSpecName: "kube-api-access-9rjd6") pod "9d0580a7-5f19-4aa4-893f-106812b15326" (UID: "9d0580a7-5f19-4aa4-893f-106812b15326"). InnerVolumeSpecName "kube-api-access-9rjd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815393 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e09e2e76-7c0b-4efa-b226-18df0a512567-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815420 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d0580a7-5f19-4aa4-893f-106812b15326-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815431 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwzfr\" (UniqueName: \"kubernetes.io/projected/e09e2e76-7c0b-4efa-b226-18df0a512567-kube-api-access-jwzfr\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:27 crc kubenswrapper[4766]: I0130 17:48:27.815442 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjd6\" (UniqueName: \"kubernetes.io/projected/9d0580a7-5f19-4aa4-893f-106812b15326-kube-api-access-9rjd6\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210367 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q5td7" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q5td7" event={"ID":"e09e2e76-7c0b-4efa-b226-18df0a512567","Type":"ContainerDied","Data":"92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e"} Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.210499 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a6012a11fcd5aa262360bec683c731ecf508b92807da3cdc67df994d81261e" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212745 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7780-account-create-update-96kcq" event={"ID":"9d0580a7-5f19-4aa4-893f-106812b15326","Type":"ContainerDied","Data":"5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528"} Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212786 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3adb3a466ec48c63d358c60a04d8af3697680f93cd2029491e6ee4b314f528" Jan 30 17:48:28 crc kubenswrapper[4766]: I0130 17:48:28.212844 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7780-account-create-update-96kcq" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721559 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:29 crc kubenswrapper[4766]: E0130 17:48:29.721924 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721937 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: E0130 17:48:29.721959 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.721965 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.722101 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" containerName="mariadb-account-create-update" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.722118 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" containerName="mariadb-database-create" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.723020 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.724797 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725276 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725473 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.725564 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.746771 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.846819 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.847020 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.847196 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949136 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.949300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.955135 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.955675 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:29 crc kubenswrapper[4766]: I0130 17:48:29.970606 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"keystone-db-sync-6hlg5\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:30 crc kubenswrapper[4766]: I0130 17:48:30.038989 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:30 crc kubenswrapper[4766]: I0130 17:48:30.496046 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6hlg5"] Jan 30 17:48:30 crc kubenswrapper[4766]: W0130 17:48:30.501028 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a04cef9_eaad_4fba_9aa9_0f15ed426885.slice/crio-797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a WatchSource:0}: Error finding container 797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a: Status 404 returned error can't find the container with id 797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.246315 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerStarted","Data":"a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a"} Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.246363 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerStarted","Data":"797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a"} Jan 30 17:48:31 crc kubenswrapper[4766]: I0130 17:48:31.267997 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6hlg5" podStartSLOduration=2.267978378 podStartE2EDuration="2.267978378s" podCreationTimestamp="2026-01-30 17:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:31.266734735 +0000 UTC m=+5165.904692101" watchObservedRunningTime="2026-01-30 17:48:31.267978378 +0000 UTC m=+5165.905935724" Jan 30 17:48:33 crc kubenswrapper[4766]: I0130 17:48:33.263748 4766 generic.go:334] "Generic (PLEG): container finished" podID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerID="a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a" exitCode=0 Jan 30 17:48:33 crc kubenswrapper[4766]: I0130 17:48:33.263844 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerDied","Data":"a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a"} Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.619379 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732036 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732189 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.732301 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") pod \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\" (UID: \"7a04cef9-eaad-4fba-9aa9-0f15ed426885\") " Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.737542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj" (OuterVolumeSpecName: "kube-api-access-xsdwj") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "kube-api-access-xsdwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.757650 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.769545 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data" (OuterVolumeSpecName: "config-data") pod "7a04cef9-eaad-4fba-9aa9-0f15ed426885" (UID: "7a04cef9-eaad-4fba-9aa9-0f15ed426885"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834425 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsdwj\" (UniqueName: \"kubernetes.io/projected/7a04cef9-eaad-4fba-9aa9-0f15ed426885-kube-api-access-xsdwj\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834462 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:34 crc kubenswrapper[4766]: I0130 17:48:34.834472 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a04cef9-eaad-4fba-9aa9-0f15ed426885-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281776 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6hlg5" event={"ID":"7a04cef9-eaad-4fba-9aa9-0f15ed426885","Type":"ContainerDied","Data":"797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a"} Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281817 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797efd2eb388ecb77489cbf9f956dc11307b33e66dbf00d7ce49fc1b68049e3a" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.281819 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6hlg5" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.574690 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:35 crc kubenswrapper[4766]: E0130 17:48:35.575104 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.575118 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.575410 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" containerName="keystone-db-sync" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.576620 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.584365 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.585500 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.593741 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.594250 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.595908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.596534 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.596829 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.597044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.607053 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677166 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677242 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677279 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677411 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " 
pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677442 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677510 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.677888 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779722 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779793 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779824 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " 
pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779926 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779960 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.779989 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.780021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.780052 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781003 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781515 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.781610 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.782549 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.786205 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.786613 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.787244 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.789502 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.796006 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.804616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"dnsmasq-dns-77f4494f49-kmx27\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.812728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"keystone-bootstrap-xhm6m\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.903810 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:35 crc kubenswrapper[4766]: I0130 17:48:35.912561 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:36 crc kubenswrapper[4766]: I0130 17:48:36.428155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:36 crc kubenswrapper[4766]: W0130 17:48:36.436667 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83eadd27_65d9_4d4b_aa94_e58a77793239.slice/crio-6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40 WatchSource:0}: Error finding container 6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40: Status 404 returned error can't find the container with id 6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40 Jan 30 17:48:36 crc kubenswrapper[4766]: I0130 17:48:36.504906 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:48:36 crc kubenswrapper[4766]: W0130 17:48:36.515852 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd56c4a02_3a71_44af_b4e3_c01fdfe94aa2.slice/crio-8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743 WatchSource:0}: Error finding container 8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743: Status 404 returned error can't find the container with id 8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743 Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.317500 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerStarted","Data":"39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.317791 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerStarted","Data":"6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319454 4766 generic.go:334] "Generic (PLEG): container finished" podID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerID="221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633" exitCode=0 Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.319504 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerStarted","Data":"8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743"} Jan 30 17:48:37 crc kubenswrapper[4766]: I0130 17:48:37.341388 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xhm6m" podStartSLOduration=2.341370844 podStartE2EDuration="2.341370844s" podCreationTimestamp="2026-01-30 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 17:48:37.338278823 +0000 UTC m=+5171.976236159" watchObservedRunningTime="2026-01-30 17:48:37.341370844 +0000 UTC m=+5171.979328190" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.041204 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:38 crc kubenswrapper[4766]: E0130 17:48:38.041775 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.330045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerStarted","Data":"0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0"} Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.330345 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:38 crc kubenswrapper[4766]: I0130 17:48:38.358090 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" podStartSLOduration=3.358073362 podStartE2EDuration="3.358073362s" podCreationTimestamp="2026-01-30 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:38.352480006 +0000 UTC m=+5172.990437352" watchObservedRunningTime="2026-01-30 17:48:38.358073362 +0000 UTC m=+5172.996030708" Jan 30 17:48:39 crc kubenswrapper[4766]: I0130 17:48:39.956911 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 17:48:40 crc kubenswrapper[4766]: I0130 17:48:40.350614 4766 generic.go:334] "Generic (PLEG): container finished" podID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerID="39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716" exitCode=0 Jan 30 17:48:40 crc kubenswrapper[4766]: I0130 17:48:40.350665 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerDied","Data":"39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716"} Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.707054 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781265 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781783 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781885 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.781990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") pod \"83eadd27-65d9-4d4b-aa94-e58a77793239\" (UID: \"83eadd27-65d9-4d4b-aa94-e58a77793239\") " Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.786762 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts" (OuterVolumeSpecName: "scripts") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.786914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.787742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.787773 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4" (OuterVolumeSpecName: "kube-api-access-862w4") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "kube-api-access-862w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.805570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.806605 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data" (OuterVolumeSpecName: "config-data") pod "83eadd27-65d9-4d4b-aa94-e58a77793239" (UID: "83eadd27-65d9-4d4b-aa94-e58a77793239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884226 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862w4\" (UniqueName: \"kubernetes.io/projected/83eadd27-65d9-4d4b-aa94-e58a77793239-kube-api-access-862w4\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884555 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884629 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884683 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884741 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:41 crc kubenswrapper[4766]: I0130 17:48:41.884794 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83eadd27-65d9-4d4b-aa94-e58a77793239-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.367739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhm6m" event={"ID":"83eadd27-65d9-4d4b-aa94-e58a77793239","Type":"ContainerDied","Data":"6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40"} Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.367778 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6400380770eb3831da9c27d15183138e14280fb5e142a3214e48b17d35052a40" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 
17:48:42.367850 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhm6m" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.424590 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.430004 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xhm6m"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.546007 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:42 crc kubenswrapper[4766]: E0130 17:48:42.546866 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.546955 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.547488 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" containerName="keystone-bootstrap" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.548424 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.551999 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.552612 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.552879 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.553094 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.555448 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.555545 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.700974 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701046 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " 
pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701659 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.701753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803161 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803271 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803334 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.803611 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807058 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807660 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.807831 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.808486 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.808532 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.819741 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"keystone-bootstrap-zr744\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:42 crc kubenswrapper[4766]: I0130 17:48:42.864580 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:43 crc kubenswrapper[4766]: I0130 17:48:43.255401 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:48:43 crc kubenswrapper[4766]: W0130 17:48:43.258221 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c267d58_0d99_463b_9011_34118e7f961a.slice/crio-69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41 WatchSource:0}: Error finding container 69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41: Status 404 returned error can't find the container with id 69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41 Jan 30 17:48:43 crc kubenswrapper[4766]: I0130 17:48:43.376465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerStarted","Data":"69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41"} Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.052035 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83eadd27-65d9-4d4b-aa94-e58a77793239" path="/var/lib/kubelet/pods/83eadd27-65d9-4d4b-aa94-e58a77793239/volumes" Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.392979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerStarted","Data":"bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa"} Jan 30 17:48:44 crc kubenswrapper[4766]: I0130 17:48:44.423541 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zr744" podStartSLOduration=2.42351781 podStartE2EDuration="2.42351781s" podCreationTimestamp="2026-01-30 17:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:44.418707314 +0000 UTC m=+5179.056664680" watchObservedRunningTime="2026-01-30 17:48:44.42351781 +0000 UTC m=+5179.061475156" Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.905395 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.961931 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:45 crc kubenswrapper[4766]: I0130 17:48:45.962234 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" containerID="cri-o://d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08" gracePeriod=10 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.424920 4766 generic.go:334] "Generic (PLEG): container finished" podID="9c267d58-0d99-463b-9011-34118e7f961a" containerID="bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa" exitCode=0 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.425031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerDied","Data":"bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442808 4766 generic.go:334] 
"Generic (PLEG): container finished" podID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerID="d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08" exitCode=0 Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442854 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" event={"ID":"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d","Type":"ContainerDied","Data":"a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9"} Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.442896 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a807fa870f0e90a7991e2ca2af75e1355936893f5199ae4f636d635b578f5ca9" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.463480 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568229 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568274 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568398 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.568475 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") pod \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\" (UID: \"4ddf7416-cc27-4f15-9843-4ef68d7d4b1d\") " Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.586227 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k" (OuterVolumeSpecName: "kube-api-access-7qp4k") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "kube-api-access-7qp4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.615364 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config" (OuterVolumeSpecName: "config") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.616556 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.617472 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.619346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" (UID: "4ddf7416-cc27-4f15-9843-4ef68d7d4b1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670767 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670810 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670821 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670835 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qp4k\" (UniqueName: \"kubernetes.io/projected/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-kube-api-access-7qp4k\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:46 crc kubenswrapper[4766]: I0130 17:48:46.670847 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.449000 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.483808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.489913 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fdcd7795-tjgm8"] Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.789202 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887102 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887250 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887323 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887391 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.887472 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") pod \"9c267d58-0d99-463b-9011-34118e7f961a\" (UID: \"9c267d58-0d99-463b-9011-34118e7f961a\") " Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890657 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890778 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.890942 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts" (OuterVolumeSpecName: "scripts") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.891345 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb" (OuterVolumeSpecName: "kube-api-access-g7jjb") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "kube-api-access-g7jjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.909482 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.911461 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data" (OuterVolumeSpecName: "config-data") pod "9c267d58-0d99-463b-9011-34118e7f961a" (UID: "9c267d58-0d99-463b-9011-34118e7f961a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988921 4766 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988952 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7jjb\" (UniqueName: \"kubernetes.io/projected/9c267d58-0d99-463b-9011-34118e7f961a-kube-api-access-g7jjb\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988963 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988971 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988979 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:47 crc kubenswrapper[4766]: I0130 17:48:47.988987 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c267d58-0d99-463b-9011-34118e7f961a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.051072 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" path="/var/lib/kubelet/pods/4ddf7416-cc27-4f15-9843-4ef68d7d4b1d/volumes" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456498 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zr744" event={"ID":"9c267d58-0d99-463b-9011-34118e7f961a","Type":"ContainerDied","Data":"69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41"} Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456539 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69365c38c9e87ab8654abea3ad45813c59ecd6ceabb392f68d9ea51a6183da41" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.456559 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zr744" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527236 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527621 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527643 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527658 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="init" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527666 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="init" Jan 30 17:48:48 crc kubenswrapper[4766]: E0130 17:48:48.527679 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527686 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527843 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c267d58-0d99-463b-9011-34118e7f961a" containerName="keystone-bootstrap" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.527860 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.528406 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531814 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zq5b" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531863 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.531896 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.532335 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.538900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599336 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599712 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599765 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599789 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.599813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700849 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" 
(UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700917 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.700967 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.701053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706250 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-scripts\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706373 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-credential-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706576 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-config-data\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.706597 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-combined-ca-bundle\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.707118 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2175d86-a673-4c75-9344-d410bff4770a-fernet-keys\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.722996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84gn\" (UniqueName: \"kubernetes.io/projected/d2175d86-a673-4c75-9344-d410bff4770a-kube-api-access-b84gn\") pod \"keystone-d9bc78c74-tqx5h\" (UID: \"d2175d86-a673-4c75-9344-d410bff4770a\") " pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:48 crc kubenswrapper[4766]: I0130 17:48:48.872146 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:49 crc kubenswrapper[4766]: I0130 17:48:49.307411 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9bc78c74-tqx5h"] Jan 30 17:48:49 crc kubenswrapper[4766]: W0130 17:48:49.315552 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2175d86_a673_4c75_9344_d410bff4770a.slice/crio-37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923 WatchSource:0}: Error finding container 37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923: Status 404 returned error can't find the container with id 37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923 Jan 30 17:48:49 crc kubenswrapper[4766]: I0130 17:48:49.479608 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9bc78c74-tqx5h" event={"ID":"d2175d86-a673-4c75-9344-d410bff4770a","Type":"ContainerStarted","Data":"37ffe040a9fb5c8ce2316939bb28a6d294efe0c3ccd57bef362dcb3722f85923"} Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.488904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9bc78c74-tqx5h" event={"ID":"d2175d86-a673-4c75-9344-d410bff4770a","Type":"ContainerStarted","Data":"a3a9e271e5adcc9216346b37d04fe08b89775cb7254ad09c6fcfddb496f06d4c"} Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.489324 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:48:50 crc kubenswrapper[4766]: I0130 17:48:50.509087 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d9bc78c74-tqx5h" podStartSLOduration=2.509050274 podStartE2EDuration="2.509050274s" podCreationTimestamp="2026-01-30 17:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:48:50.505414109 +0000 UTC m=+5185.143371455" watchObservedRunningTime="2026-01-30 17:48:50.509050274 +0000 UTC m=+5185.147007620" Jan 30 17:48:51 crc kubenswrapper[4766]: I0130 17:48:51.285043 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8fdcd7795-tjgm8" podUID="4ddf7416-cc27-4f15-9843-4ef68d7d4b1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.8:5353: i/o timeout" Jan 30 17:48:52 crc kubenswrapper[4766]: I0130 17:48:52.040032 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:48:52 crc kubenswrapper[4766]: E0130 17:48:52.040292 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.244357 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.246816 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.255900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359579 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359647 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.359676 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.461762 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.483756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.483870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.489708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"certified-operators-z4k4s\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:03 crc kubenswrapper[4766]: I0130 17:49:03.784541 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.257966 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:04 crc kubenswrapper[4766]: W0130 17:49:04.267351 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb4fd9d_0f69_412a_80ee_5ae509d9fff7.slice/crio-9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3 WatchSource:0}: Error finding container 9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3: Status 404 returned error can't find the container with id 9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3 Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597135 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" exitCode=0 Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597218 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364"} Jan 30 17:49:04 crc kubenswrapper[4766]: I0130 17:49:04.597425 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerStarted","Data":"9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3"} Jan 30 17:49:06 crc kubenswrapper[4766]: I0130 17:49:06.613398 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" exitCode=0 Jan 30 17:49:06 crc kubenswrapper[4766]: I0130 17:49:06.613478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a"} Jan 30 17:49:07 crc kubenswrapper[4766]: I0130 17:49:07.039114 4766 scope.go:117] "RemoveContainer" 
containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:07 crc kubenswrapper[4766]: E0130 17:49:07.039731 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:08 crc kubenswrapper[4766]: I0130 17:49:08.642585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerStarted","Data":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} Jan 30 17:49:08 crc kubenswrapper[4766]: I0130 17:49:08.668056 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z4k4s" podStartSLOduration=2.823199805 podStartE2EDuration="5.668032887s" podCreationTimestamp="2026-01-30 17:49:03 +0000 UTC" firstStartedPulling="2026-01-30 17:49:04.598682718 +0000 UTC m=+5199.236640064" lastFinishedPulling="2026-01-30 17:49:07.44351578 +0000 UTC m=+5202.081473146" observedRunningTime="2026-01-30 17:49:08.666862206 +0000 UTC m=+5203.304819562" watchObservedRunningTime="2026-01-30 17:49:08.668032887 +0000 UTC m=+5203.305990233" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.785524 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.785872 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:13 crc kubenswrapper[4766]: I0130 17:49:13.840657 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:14 crc kubenswrapper[4766]: I0130 17:49:14.733082 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:14 crc kubenswrapper[4766]: I0130 17:49:14.788662 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:16 crc kubenswrapper[4766]: I0130 17:49:16.702605 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4k4s" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" containerID="cri-o://4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" gracePeriod=2 Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.618400 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710185 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710411 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.710467 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") pod \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\" (UID: \"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7\") " Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.711382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities" (OuterVolumeSpecName: "utilities") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713476 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" exitCode=0 Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4s" event={"ID":"8cb4fd9d-0f69-412a-80ee-5ae509d9fff7","Type":"ContainerDied","Data":"9c893eee3aea14a4d7afe327a1498b0ddf2e526362f273c73c0fef20008a2bf3"} Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713581 4766 scope.go:117] "RemoveContainer" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.713737 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4s" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.717930 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj" (OuterVolumeSpecName: "kube-api-access-dv8dj") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "kube-api-access-dv8dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.760119 4766 scope.go:117] "RemoveContainer" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.763240 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" (UID: "8cb4fd9d-0f69-412a-80ee-5ae509d9fff7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.779941 4766 scope.go:117] "RemoveContainer" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.811283 4766 scope.go:117] "RemoveContainer" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.811800 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": container with ID starting with 4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291 not found: ID does not exist" containerID="4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812103 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291"} err="failed to get container status \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": rpc error: code = NotFound desc = could not find container \"4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291\": container with ID starting with 4aa7ffd12e9eb98c634df2bdec484daa37abfb9ecc0df4d07814293b7ff68291 not found: ID does not exist" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812125 4766 scope.go:117] "RemoveContainer" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.812571 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": container with ID starting with 3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a not found: ID does not exist" containerID="3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812659 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a"} err="failed to get container status \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": rpc error: code = NotFound desc = could not find container \"3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a\": container with ID starting with 3b217b7b05879b9b34a79438b996c8a047d6426ba1855d5c839e842921d2329a not found: ID does not exist" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812692 4766 scope.go:117] "RemoveContainer" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: 
I0130 17:49:17.812881 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8dj\" (UniqueName: \"kubernetes.io/projected/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-kube-api-access-dv8dj\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812901 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.812912 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:49:17 crc kubenswrapper[4766]: E0130 17:49:17.813315 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": container with ID starting with 142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364 not found: ID does not exist" containerID="142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364" Jan 30 17:49:17 crc kubenswrapper[4766]: I0130 17:49:17.813337 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364"} err="failed to get container status \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": rpc error: code = NotFound desc = could not find container \"142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364\": container with ID starting with 142030d434abf493fad5a17f0cf9f31ace527337f988319dac06f8a3c899f364 not found: ID does not exist" Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.044473 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:18 crc kubenswrapper[4766]: E0130 17:49:18.044684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.060827 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:18 crc kubenswrapper[4766]: I0130 17:49:18.061737 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4k4s"] Jan 30 17:49:20 crc kubenswrapper[4766]: I0130 17:49:20.048573 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" path="/var/lib/kubelet/pods/8cb4fd9d-0f69-412a-80ee-5ae509d9fff7/volumes" Jan 30 17:49:20 crc kubenswrapper[4766]: I0130 17:49:20.315293 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d9bc78c74-tqx5h" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.229956 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230820 4766 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-utilities" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230836 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-utilities" Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230862 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230870 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: E0130 17:49:24.230892 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-content" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.230902 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="extract-content" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.231084 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb4fd9d-0f69-412a-80ee-5ae509d9fff7" containerName="registry-server" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.231718 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.240474 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.240696 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.241308 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5thlv" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.245682 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364329 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.364452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466629 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466710 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.466771 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.468009 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.472620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.483246 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"openstackclient\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.549349 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 17:49:24 crc kubenswrapper[4766]: I0130 17:49:24.962569 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 17:49:24 crc kubenswrapper[4766]: W0130 17:49:24.968635 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0b97605_5664_4ae7_a15d_26b0ae7b4614.slice/crio-c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982 WatchSource:0}: Error finding container c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982: Status 404 returned error can't find the container with id c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982 Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.837361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b97605-5664-4ae7-a15d-26b0ae7b4614","Type":"ContainerStarted","Data":"4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd"} Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.837646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b97605-5664-4ae7-a15d-26b0ae7b4614","Type":"ContainerStarted","Data":"c318f8e40c26125c8e3b00c8d69461bd6e8e95d7a572dd588629d5d36bd8f982"} Jan 30 17:49:25 crc kubenswrapper[4766]: I0130 17:49:25.857638 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.857615692 podStartE2EDuration="1.857615692s" podCreationTimestamp="2026-01-30 17:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:49:25.854533171 +0000 UTC m=+5220.492490517" watchObservedRunningTime="2026-01-30 17:49:25.857615692 +0000 UTC m=+5220.495573058" Jan 30 17:49:31 crc kubenswrapper[4766]: I0130 17:49:31.040127 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:31 crc kubenswrapper[4766]: E0130 17:49:31.040754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:43 crc kubenswrapper[4766]: I0130 17:49:43.041466 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:43 crc kubenswrapper[4766]: E0130 17:49:43.043651 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:49:56 crc kubenswrapper[4766]: I0130 17:49:56.044307 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:49:56 crc kubenswrapper[4766]: E0130 17:49:56.045063 4766 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:09 crc kubenswrapper[4766]: I0130 17:50:09.039298 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:09 crc kubenswrapper[4766]: E0130 17:50:09.040020 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:22 crc kubenswrapper[4766]: I0130 17:50:22.039217 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:22 crc kubenswrapper[4766]: E0130 17:50:22.040012 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:33 crc kubenswrapper[4766]: I0130 17:50:33.039768 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:33 crc kubenswrapper[4766]: E0130 17:50:33.040515 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:50:45 crc kubenswrapper[4766]: I0130 17:50:45.038931 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:50:45 crc kubenswrapper[4766]: I0130 17:50:45.470582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.022213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.023880 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.063386 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.064581 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.064677 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.067114 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.073682 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183604 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.183772 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.184004 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285870 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.285941 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286004 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286940 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.286957 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.307061 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"barbican-db-create-v7zdn\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.307199 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"barbican-360a-account-create-update-9fwlc\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.359605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.384342 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.809991 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 17:51:00 crc kubenswrapper[4766]: W0130 17:51:00.820921 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2bd9b1_3f21_43b5_ab17_c0724bbbafd9.slice/crio-fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce WatchSource:0}: Error finding container fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce: Status 404 returned error can't find the container with id fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce Jan 30 17:51:00 crc kubenswrapper[4766]: I0130 17:51:00.867856 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 17:51:00 crc kubenswrapper[4766]: W0130 17:51:00.868444 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa06091d_37e1_4828_9f71_7160f12ac3de.slice/crio-3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983 WatchSource:0}: Error finding container 3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983: Status 404 returned error can't find the container with id 3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593013 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerID="61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f" exitCode=0 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerDied","Data":"61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.593319 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerStarted","Data":"3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594877 4766 generic.go:334] "Generic (PLEG): container finished" podID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerID="b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36" exitCode=0 Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerDied","Data":"b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36"} Jan 30 17:51:01 crc kubenswrapper[4766]: I0130 17:51:01.594918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerStarted","Data":"fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce"} Jan 30 17:51:02 crc kubenswrapper[4766]: I0130 17:51:02.987813 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:02 crc kubenswrapper[4766]: I0130 17:51:02.993805 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.135673 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") pod \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.135945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") pod \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\" (UID: \"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136207 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") pod \"aa06091d-37e1-4828-9f71-7160f12ac3de\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136849 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") pod \"aa06091d-37e1-4828-9f71-7160f12ac3de\" (UID: \"aa06091d-37e1-4828-9f71-7160f12ac3de\") " Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.136654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" (UID: "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa06091d-37e1-4828-9f71-7160f12ac3de" (UID: "aa06091d-37e1-4828-9f71-7160f12ac3de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137840 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa06091d-37e1-4828-9f71-7160f12ac3de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.137921 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.141726 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x" (OuterVolumeSpecName: "kube-api-access-s4n4x") pod "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" (UID: "3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9"). 
InnerVolumeSpecName "kube-api-access-s4n4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.142093 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht" (OuterVolumeSpecName: "kube-api-access-pb9ht") pod "aa06091d-37e1-4828-9f71-7160f12ac3de" (UID: "aa06091d-37e1-4828-9f71-7160f12ac3de"). InnerVolumeSpecName "kube-api-access-pb9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.242914 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n4x\" (UniqueName: \"kubernetes.io/projected/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9-kube-api-access-s4n4x\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.242946 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb9ht\" (UniqueName: \"kubernetes.io/projected/aa06091d-37e1-4828-9f71-7160f12ac3de-kube-api-access-pb9ht\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.610889 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-360a-account-create-update-9fwlc" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.610881 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-360a-account-create-update-9fwlc" event={"ID":"aa06091d-37e1-4828-9f71-7160f12ac3de","Type":"ContainerDied","Data":"3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983"} Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.611031 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eed4057c6d5e23b070c8557123990e8fda61ee79aca7be4e57cae92caf7a983" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612527 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7zdn" event={"ID":"3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9","Type":"ContainerDied","Data":"fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce"} Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612555 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb61dacf00d0c321e5be9f820e9c55ef244873728712b98d1f6fbeb605fd0ce" Jan 30 17:51:03 crc kubenswrapper[4766]: I0130 17:51:03.612706 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7zdn" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.338052 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:05 crc kubenswrapper[4766]: E0130 17:51:05.338936 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.338960 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: E0130 17:51:05.339000 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339009 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339264 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" containerName="mariadb-database-create" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.339282 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" containerName="mariadb-account-create-update" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.340240 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.342929 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ck5sq" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.344379 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.350866 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.480813 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.480980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.481191 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g49wm\" 
(UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.583664 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.588903 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.601803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.603703 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"barbican-db-sync-h2fkl\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:05 crc kubenswrapper[4766]: I0130 17:51:05.665600 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.096842 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.633900 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerStarted","Data":"f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9"} Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.633947 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerStarted","Data":"37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd"} Jan 30 17:51:06 crc kubenswrapper[4766]: I0130 17:51:06.649111 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h2fkl" podStartSLOduration=1.649092902 podStartE2EDuration="1.649092902s" podCreationTimestamp="2026-01-30 17:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:06.645568025 +0000 UTC m=+5321.283525371" watchObservedRunningTime="2026-01-30 17:51:06.649092902 +0000 UTC m=+5321.287050248" Jan 30 17:51:08 crc kubenswrapper[4766]: I0130 17:51:08.653024 4766 generic.go:334] "Generic (PLEG): container finished" podID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerID="f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9" exitCode=0 Jan 30 17:51:08 crc kubenswrapper[4766]: I0130 17:51:08.653314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerDied","Data":"f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9"} Jan 30 17:51:09 crc kubenswrapper[4766]: I0130 17:51:09.905449 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069710 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069851 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.069955 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") pod \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\" (UID: \"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2\") " Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.077985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm" (OuterVolumeSpecName: "kube-api-access-g49wm") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "kube-api-access-g49wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.099832 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.104427 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" (UID: "b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171535 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171572 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.171586 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g49wm\" (UniqueName: \"kubernetes.io/projected/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2-kube-api-access-g49wm\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2fkl" event={"ID":"b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2","Type":"ContainerDied","Data":"37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd"} Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671097 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f54523533770e701f675d1a2a0a8445b848df53c1f3149ade26237977259fd" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.671148 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2fkl" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.913886 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:10 crc kubenswrapper[4766]: E0130 17:51:10.914895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.914915 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.915115 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" containerName="barbican-db-sync" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.916269 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920271 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920610 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.920774 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ck5sq" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.930792 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.932299 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.937670 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 17:51:10 crc kubenswrapper[4766]: I0130 17:51:10.971241 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:10.998553 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.000846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001635 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.001931 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.002194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.069890 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.071803 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.107821 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114297 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114375 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114401 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114487 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 
17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114578 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.114605 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.115487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-logs\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.122893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.123022 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data-custom\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124216 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-config-data\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124454 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.126416 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.130901 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.124083 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-combined-ca-bundle\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.158255 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdshk\" (UniqueName: \"kubernetes.io/projected/eb8f2fee-863e-4c1e-90af-6ed7a631a4ac-kube-api-access-hdshk\") pod \"barbican-worker-84dcf975b7-fj984\" (UID: \"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac\") " pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216570 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216672 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216809 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216843 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.216988 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " 
pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217070 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217091 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217126 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217215 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217239 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217259 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217278 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217303 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.217965 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6132938-2052-4889-b1d7-2e43deb664e1-logs\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.222201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-combined-ca-bundle\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.230257 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.238338 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx79n\" (UniqueName: \"kubernetes.io/projected/a6132938-2052-4889-b1d7-2e43deb664e1-kube-api-access-tx79n\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.242119 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84dcf975b7-fj984" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.242254 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6132938-2052-4889-b1d7-2e43deb664e1-config-data-custom\") pod \"barbican-keystone-listener-78445c974-66754\" (UID: \"a6132938-2052-4889-b1d7-2e43deb664e1\") " pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.252333 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78445c974-66754" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319557 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319883 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319936 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.319978 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320064 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320085 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320112 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " 
pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320133 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320657 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0607eb3-be12-4282-ac48-55b5220b4888-logs\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.320922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.321047 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.322012 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.324494 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data-custom\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.324733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.325365 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-config-data\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.326803 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0607eb3-be12-4282-ac48-55b5220b4888-combined-ca-bundle\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.341801 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vcndz\" (UniqueName: \"kubernetes.io/projected/c0607eb3-be12-4282-ac48-55b5220b4888-kube-api-access-vcndz\") pod \"barbican-api-5b7b4f6b66-crqxp\" (UID: \"c0607eb3-be12-4282-ac48-55b5220b4888\") " pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.346933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"dnsmasq-dns-6f4b85cbd9-qr7g8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.390821 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.524710 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.773085 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78445c974-66754"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.844768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84dcf975b7-fj984"] Jan 30 17:51:11 crc kubenswrapper[4766]: I0130 17:51:11.936850 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.114729 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7b4f6b66-crqxp"] Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694438 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"389cdc439394cf9e2a0253416f271aa4a58746f87c121c65b9b03c78fb1ceacd"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694834 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"78492f49f846b1256b3db7b53047261273622abe7a4794f3ed572978359ecc54"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.694852 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78445c974-66754" event={"ID":"a6132938-2052-4889-b1d7-2e43deb664e1","Type":"ContainerStarted","Data":"e7622063fdf190dc4ccf71c87249acb6f6959e90f9e8cb8f3a873c9d6a4c8cfe"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696279 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"dfd96894ccfafc22aae2011b188b54b8bf915c7933591a4684f23e54bdc33901"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696391 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696470 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696546 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" 
event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"9bb9b9ad67a62b19a9976e1b2c313627445cd9818a62877c503776085a30fbd9"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.696607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7b4f6b66-crqxp" event={"ID":"c0607eb3-be12-4282-ac48-55b5220b4888","Type":"ContainerStarted","Data":"6ecac4918b239081671a08f85badf1a13396f5fe11e242a4bc6c0650658e4926"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698126 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"924476c1644746c6b40ebb18696b8734d8b090b6ff5fdb004ac01cb93030580a"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698169 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"923cd2a7cb9abde1b8b978ceae5b5b8a54640b6febc9cdeb634c4ce79ce28775"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.698203 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84dcf975b7-fj984" event={"ID":"eb8f2fee-863e-4c1e-90af-6ed7a631a4ac","Type":"ContainerStarted","Data":"535cb8b70fa905ffe5b07582d53b005fb3401118c255483eaa57449afdb1880e"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.702846 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cfd4446-3501-49ef-911f-360c75070ca8" containerID="2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a" exitCode=0 Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.702902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.703278 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerStarted","Data":"2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232"} Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.720505 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78445c974-66754" podStartSLOduration=2.7204847819999998 podStartE2EDuration="2.720484782s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.710420418 +0000 UTC m=+5327.348377764" watchObservedRunningTime="2026-01-30 17:51:12.720484782 +0000 UTC m=+5327.358442128" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.754118 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b7b4f6b66-crqxp" podStartSLOduration=1.754100866 podStartE2EDuration="1.754100866s" podCreationTimestamp="2026-01-30 17:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.750834247 +0000 UTC m=+5327.388791593" watchObservedRunningTime="2026-01-30 17:51:12.754100866 +0000 UTC m=+5327.392058212" Jan 30 17:51:12 crc kubenswrapper[4766]: I0130 17:51:12.772763 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84dcf975b7-fj984" podStartSLOduration=2.772743702 podStartE2EDuration="2.772743702s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:12.763730707 +0000 UTC m=+5327.401688063" watchObservedRunningTime="2026-01-30 17:51:12.772743702 +0000 UTC m=+5327.410701048" Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.716607 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerStarted","Data":"325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"} Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.716921 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:13 crc kubenswrapper[4766]: I0130 17:51:13.742677 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" podStartSLOduration=3.742656293 podStartE2EDuration="3.742656293s" podCreationTimestamp="2026-01-30 17:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:13.738920591 +0000 UTC m=+5328.376877947" watchObservedRunningTime="2026-01-30 17:51:13.742656293 +0000 UTC m=+5328.380613639" Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.066290 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xfq5b"] Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.081884 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xfq5b"] Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.393118 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.459206 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.459490 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" containerID="cri-o://0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" gracePeriod=10 Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.780985 4766 generic.go:334] "Generic (PLEG): container finished" podID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerID="0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" exitCode=0 Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.781335 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0"} Jan 30 17:51:21 crc kubenswrapper[4766]: I0130 17:51:21.999435 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.064327 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e74a4a8-0c9c-4bba-b839-4caeca1e9304" path="/var/lib/kubelet/pods/0e74a4a8-0c9c-4bba-b839-4caeca1e9304/volumes" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141662 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141805 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141929 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141970 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.141992 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") pod \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\" (UID: \"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2\") " Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.147091 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8" (OuterVolumeSpecName: "kube-api-access-kdrv8") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "kube-api-access-kdrv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.184102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.187926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.188872 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.189629 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config" (OuterVolumeSpecName: "config") pod "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" (UID: "d56c4a02-3a71-44af-b4e3-c01fdfe94aa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244397 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244439 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244453 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244468 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.244482 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdrv8\" (UniqueName: \"kubernetes.io/projected/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2-kube-api-access-kdrv8\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790003 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" event={"ID":"d56c4a02-3a71-44af-b4e3-c01fdfe94aa2","Type":"ContainerDied","Data":"8ceea891fbcb5fb421d81a3c1c5593d03fae3166d751648db9c3253347233743"} Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790060 4766 scope.go:117] "RemoveContainer" containerID="0a46cd154d575e3a8c79e1f39b696f40c2dd09cb6642b1622e60f70d1ca2fbf0" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.790223 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f4494f49-kmx27" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.832717 4766 scope.go:117] "RemoveContainer" containerID="221d49a1d4c421b4915316ea508e130c64fe759e3aa996c068719e4d84855633" Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.836992 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:51:22 crc kubenswrapper[4766]: I0130 17:51:22.855666 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77f4494f49-kmx27"] Jan 30 17:51:23 crc kubenswrapper[4766]: I0130 17:51:23.107955 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:23 crc kubenswrapper[4766]: I0130 17:51:23.180064 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7b4f6b66-crqxp" Jan 30 17:51:24 crc kubenswrapper[4766]: I0130 17:51:24.050983 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" path="/var/lib/kubelet/pods/d56c4a02-3a71-44af-b4e3-c01fdfe94aa2/volumes" Jan 30 17:51:31 crc kubenswrapper[4766]: I0130 17:51:31.730785 4766 scope.go:117] "RemoveContainer" containerID="a1009dde22ffcc8455d2189a3b2d9bd31c4314e79dc5a1b8bf480ca3671346fc" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.870432 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:34 crc kubenswrapper[4766]: E0130 17:51:34.871308 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871321 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: E0130 17:51:34.871342 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="init" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871348 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="init" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.871498 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56c4a02-3a71-44af-b4e3-c01fdfe94aa2" containerName="dnsmasq-dns" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.872116 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.881768 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.970816 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.971946 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.972642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.972836 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.973656 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 17:51:34 crc kubenswrapper[4766]: I0130 17:51:34.988489 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.074814 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.074881 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075232 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.075935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.095101 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod 
\"neutron-db-create-jdcqq\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.177244 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.177338 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.178084 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.186645 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.195422 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"neutron-7364-account-create-update-5qkkz\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.288925 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.608135 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 17:51:35 crc kubenswrapper[4766]: W0130 17:51:35.615437 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d09a627_470a_4719_a1d8_458eda413878.slice/crio-f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5 WatchSource:0}: Error finding container f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5: Status 404 returned error can't find the container with id f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5 Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.777314 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.885537 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerStarted","Data":"05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.887465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerStarted","Data":"b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.887520 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerStarted","Data":"f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5"} Jan 30 17:51:35 crc kubenswrapper[4766]: I0130 17:51:35.903592 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jdcqq" podStartSLOduration=1.9035686699999999 podStartE2EDuration="1.90356867s" podCreationTimestamp="2026-01-30 17:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:35.899814679 +0000 UTC m=+5350.537772035" watchObservedRunningTime="2026-01-30 17:51:35.90356867 +0000 UTC m=+5350.541526016" Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.896832 4766 generic.go:334] "Generic (PLEG): container finished" podID="9d09a627-470a-4719-a1d8-458eda413878" containerID="b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71" exitCode=0 Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.896917 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerDied","Data":"b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71"} Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.898788 4766 generic.go:334] "Generic (PLEG): container finished" podID="632e98c6-d202-4c07-9220-636bd07da76d" containerID="e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab" exitCode=0 Jan 30 17:51:36 crc kubenswrapper[4766]: I0130 17:51:36.898827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" 
event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerDied","Data":"e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.383243 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.388462 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463162 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") pod \"632e98c6-d202-4c07-9220-636bd07da76d\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463359 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") pod \"9d09a627-470a-4719-a1d8-458eda413878\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463390 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") pod \"9d09a627-470a-4719-a1d8-458eda413878\" (UID: \"9d09a627-470a-4719-a1d8-458eda413878\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.463432 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") pod \"632e98c6-d202-4c07-9220-636bd07da76d\" (UID: \"632e98c6-d202-4c07-9220-636bd07da76d\") " Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.464807 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "632e98c6-d202-4c07-9220-636bd07da76d" (UID: "632e98c6-d202-4c07-9220-636bd07da76d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.464846 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d09a627-470a-4719-a1d8-458eda413878" (UID: "9d09a627-470a-4719-a1d8-458eda413878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.469597 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv" (OuterVolumeSpecName: "kube-api-access-dljxv") pod "632e98c6-d202-4c07-9220-636bd07da76d" (UID: "632e98c6-d202-4c07-9220-636bd07da76d"). InnerVolumeSpecName "kube-api-access-dljxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.471510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m" (OuterVolumeSpecName: "kube-api-access-h7n8m") pod "9d09a627-470a-4719-a1d8-458eda413878" (UID: "9d09a627-470a-4719-a1d8-458eda413878"). InnerVolumeSpecName "kube-api-access-h7n8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564674 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632e98c6-d202-4c07-9220-636bd07da76d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564698 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d09a627-470a-4719-a1d8-458eda413878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564708 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7n8m\" (UniqueName: \"kubernetes.io/projected/9d09a627-470a-4719-a1d8-458eda413878-kube-api-access-h7n8m\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.564717 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljxv\" (UniqueName: \"kubernetes.io/projected/632e98c6-d202-4c07-9220-636bd07da76d-kube-api-access-dljxv\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.913308 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jdcqq" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.914741 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jdcqq" event={"ID":"9d09a627-470a-4719-a1d8-458eda413878","Type":"ContainerDied","Data":"f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.914787 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0955baa7952a0b3e80dd0303be9dfdc6a839dab17620896df6b3aa5737a71e5" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916588 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7364-account-create-update-5qkkz" event={"ID":"632e98c6-d202-4c07-9220-636bd07da76d","Type":"ContainerDied","Data":"05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb"} Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916605 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05bce2aed477cea70dbc4a3338ad7356030c9b12fdfc1c75857a86ddbde346bb" Jan 30 17:51:38 crc kubenswrapper[4766]: I0130 17:51:38.916697 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7364-account-create-update-5qkkz" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249080 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:40 crc kubenswrapper[4766]: E0130 17:51:40.249922 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249942 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: E0130 17:51:40.249989 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.249999 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.250225 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="632e98c6-d202-4c07-9220-636bd07da76d" containerName="mariadb-account-create-update" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.250266 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d09a627-470a-4719-a1d8-458eda413878" containerName="mariadb-database-create" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.251123 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.252784 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.252988 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.253064 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dxxvc" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.259792 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393324 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393392 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.393417 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc 
kubenswrapper[4766]: I0130 17:51:40.494878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.494953 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.494986 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.501700 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.502131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.510704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"neutron-db-sync-6cksv\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:40 crc kubenswrapper[4766]: I0130 17:51:40.614707 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.145943 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.941499 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerStarted","Data":"a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e"} Jan 30 17:51:41 crc kubenswrapper[4766]: I0130 17:51:41.941554 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerStarted","Data":"9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c"} Jan 30 17:51:44 crc kubenswrapper[4766]: I0130 17:51:44.964344 4766 generic.go:334] "Generic (PLEG): container finished" podID="1262aa38-ee4d-4579-b034-3669dd58a238" containerID="a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e" exitCode=0 Jan 30 17:51:44 crc kubenswrapper[4766]: I0130 17:51:44.964444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerDied","Data":"a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e"} Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.297839 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396382 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396432 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.396563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") pod \"1262aa38-ee4d-4579-b034-3669dd58a238\" (UID: \"1262aa38-ee4d-4579-b034-3669dd58a238\") " Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.405382 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w" (OuterVolumeSpecName: "kube-api-access-fmp8w") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "kube-api-access-fmp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.436691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.437241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config" (OuterVolumeSpecName: "config") pod "1262aa38-ee4d-4579-b034-3669dd58a238" (UID: "1262aa38-ee4d-4579-b034-3669dd58a238"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499265 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmp8w\" (UniqueName: \"kubernetes.io/projected/1262aa38-ee4d-4579-b034-3669dd58a238-kube-api-access-fmp8w\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499326 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.499344 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1262aa38-ee4d-4579-b034-3669dd58a238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cksv" event={"ID":"1262aa38-ee4d-4579-b034-3669dd58a238","Type":"ContainerDied","Data":"9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c"} Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980281 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9634c08dc94922e1eeb1ba8f3a871513e592fe18fa7b663776b867aaa7f35d7c" Jan 30 17:51:46 crc kubenswrapper[4766]: I0130 17:51:46.980292 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cksv" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.203321 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:47 crc kubenswrapper[4766]: E0130 17:51:47.204071 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.204099 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.204367 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" containerName="neutron-db-sync" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.205477 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.213727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.302592 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.304524 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.307913 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.308039 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.308245 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-dxxvc" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.312993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.313030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.313074 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.316703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.414948 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415012 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 
17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415125 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415159 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415217 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415245 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415270 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415293 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.415316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.416146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417005 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417459 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.417634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.448682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"dnsmasq-dns-7dc5cbf9f7-zkw64\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517328 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517473 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.517556 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.522689 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-httpd-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.523403 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-combined-ca-bundle\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.523741 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.524586 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-config\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.541417 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjgj\" (UniqueName: \"kubernetes.io/projected/f8fd7445-369a-43d1-8b68-6a3d7b2abbe3-kube-api-access-2sjgj\") pod \"neutron-577cfcb8f7-k7t7l\" (UID: \"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3\") " pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:47 crc kubenswrapper[4766]: I0130 17:51:47.637732 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:48 crc kubenswrapper[4766]: I0130 17:51:48.018158 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:51:48 crc kubenswrapper[4766]: I0130 17:51:48.357452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cfcb8f7-k7t7l"] Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:48.998534 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"b2dc6f2009589171a2fecbbf84375aa1b0bc4bfae1376d7014628cf51dddb1b0"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000046 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerID="55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729" exitCode=0 Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000737 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"bd74302c924e258fde4a2f09fea2671e40e3d24fd058c8828e9537e4000ff226"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000792 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cfcb8f7-k7t7l" event={"ID":"f8fd7445-369a-43d1-8b68-6a3d7b2abbe3","Type":"ContainerStarted","Data":"b35542c0826b0f1fa728f5e02e3f960926f34005248508c4b99a54bf50cb8f1f"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000841 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.000857 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerStarted","Data":"b79a964de471e1d1b203d59d894a14ac3d8e1bae897a81215e4af1ded098934b"} Jan 30 17:51:49 crc kubenswrapper[4766]: I0130 17:51:49.030656 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-577cfcb8f7-k7t7l" podStartSLOduration=2.030634033 podStartE2EDuration="2.030634033s" 
podCreationTimestamp="2026-01-30 17:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:49.029604415 +0000 UTC m=+5363.667561761" watchObservedRunningTime="2026-01-30 17:51:49.030634033 +0000 UTC m=+5363.668591389" Jan 30 17:51:50 crc kubenswrapper[4766]: I0130 17:51:50.009508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerStarted","Data":"90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4"} Jan 30 17:51:50 crc kubenswrapper[4766]: I0130 17:51:50.035067 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podStartSLOduration=3.035048542 podStartE2EDuration="3.035048542s" podCreationTimestamp="2026-01-30 17:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:51:50.029255524 +0000 UTC m=+5364.667212870" watchObservedRunningTime="2026-01-30 17:51:50.035048542 +0000 UTC m=+5364.673005888" Jan 30 17:51:51 crc kubenswrapper[4766]: I0130 17:51:51.029748 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.525328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.597072 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:57 crc kubenswrapper[4766]: I0130 17:51:57.597400 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" containerID="cri-o://325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd" gracePeriod=10 Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079673 4766 generic.go:334] "Generic (PLEG): container finished" podID="8cfd4446-3501-49ef-911f-360c75070ca8" containerID="325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd" exitCode=0 Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"} Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079978 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" event={"ID":"8cfd4446-3501-49ef-911f-360c75070ca8","Type":"ContainerDied","Data":"2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232"} Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.079990 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.139118 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218553 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218731 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218833 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.218866 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") pod \"8cfd4446-3501-49ef-911f-360c75070ca8\" (UID: \"8cfd4446-3501-49ef-911f-360c75070ca8\") " Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.225047 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh" (OuterVolumeSpecName: "kube-api-access-s7fsh") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "kube-api-access-s7fsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.264413 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.267458 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.276424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config" (OuterVolumeSpecName: "config") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.305573 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8cfd4446-3501-49ef-911f-360c75070ca8" (UID: "8cfd4446-3501-49ef-911f-360c75070ca8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322937 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322968 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322979 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322986 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cfd4446-3501-49ef-911f-360c75070ca8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:58 crc kubenswrapper[4766]: I0130 17:51:58.322997 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7fsh\" (UniqueName: \"kubernetes.io/projected/8cfd4446-3501-49ef-911f-360c75070ca8-kube-api-access-s7fsh\") on node \"crc\" DevicePath \"\"" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.086653 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f4b85cbd9-qr7g8" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.129368 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:51:59 crc kubenswrapper[4766]: E0130 17:51:59.133273 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cfd4446_3501_49ef_911f_360c75070ca8.slice/crio-2a00f6308abf923c4adfba878c7daf0c4fdb4080490739d33a8a3b9162feb232\": RecentStats: unable to find data in memory cache]" Jan 30 17:51:59 crc kubenswrapper[4766]: I0130 17:51:59.137698 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f4b85cbd9-qr7g8"] Jan 30 17:52:00 crc kubenswrapper[4766]: I0130 17:52:00.048421 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" path="/var/lib/kubelet/pods/8cfd4446-3501-49ef-911f-360c75070ca8/volumes" Jan 30 17:52:17 crc kubenswrapper[4766]: I0130 17:52:17.655331 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-577cfcb8f7-k7t7l" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.902493 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:23 crc kubenswrapper[4766]: E0130 17:52:23.903385 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903403 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: E0130 17:52:23.903428 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="init" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903436 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="init" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.903622 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfd4446-3501-49ef-911f-360c75070ca8" containerName="dnsmasq-dns" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.904461 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.911908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.959299 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:23 crc kubenswrapper[4766]: I0130 17:52:23.959378 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.015726 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.017219 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.034100 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.036283 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062568 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062705 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.062838 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.063432 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.083076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"glance-db-create-tm7r5\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.164409 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.164573 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.165220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.184987 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"glance-823f-account-create-update-pttr7\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.257859 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.338426 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.782767 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 17:52:24 crc kubenswrapper[4766]: I0130 17:52:24.845369 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 17:52:24 crc kubenswrapper[4766]: W0130 17:52:24.851792 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf01e6326_2d83_4889_9b7a_f45b9f6f3063.slice/crio-90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff WatchSource:0}: Error finding container 90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff: Status 404 returned error can't find the container with id 90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310057 4766 generic.go:334] "Generic (PLEG): container finished" podID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerID="5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad" exitCode=0 Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310165 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerDied","Data":"5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.310226 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerStarted","Data":"5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316233 4766 generic.go:334] "Generic (PLEG): container finished" podID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerID="ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae" exitCode=0 Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316395 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerDied","Data":"ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae"} Jan 30 17:52:25 crc kubenswrapper[4766]: I0130 17:52:25.316503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerStarted","Data":"90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff"} Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.788704 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.794867 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920013 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") pod \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920076 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") pod \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\" (UID: \"f01e6326-2d83-4889-9b7a-f45b9f6f3063\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920268 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") pod \"5946960e-4a1d-4360-ae75-7648934eeb0c\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.920333 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") pod \"5946960e-4a1d-4360-ae75-7648934eeb0c\" (UID: \"5946960e-4a1d-4360-ae75-7648934eeb0c\") " Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.921050 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f01e6326-2d83-4889-9b7a-f45b9f6f3063" (UID: "f01e6326-2d83-4889-9b7a-f45b9f6f3063"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.921307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5946960e-4a1d-4360-ae75-7648934eeb0c" (UID: "5946960e-4a1d-4360-ae75-7648934eeb0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.926577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w" (OuterVolumeSpecName: "kube-api-access-tjc6w") pod "f01e6326-2d83-4889-9b7a-f45b9f6f3063" (UID: "f01e6326-2d83-4889-9b7a-f45b9f6f3063"). InnerVolumeSpecName "kube-api-access-tjc6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:26 crc kubenswrapper[4766]: I0130 17:52:26.927131 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml" (OuterVolumeSpecName: "kube-api-access-6zpml") pod "5946960e-4a1d-4360-ae75-7648934eeb0c" (UID: "5946960e-4a1d-4360-ae75-7648934eeb0c"). InnerVolumeSpecName "kube-api-access-6zpml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024930 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zpml\" (UniqueName: \"kubernetes.io/projected/5946960e-4a1d-4360-ae75-7648934eeb0c-kube-api-access-6zpml\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024960 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjc6w\" (UniqueName: \"kubernetes.io/projected/f01e6326-2d83-4889-9b7a-f45b9f6f3063-kube-api-access-tjc6w\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024970 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f01e6326-2d83-4889-9b7a-f45b9f6f3063-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.024980 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5946960e-4a1d-4360-ae75-7648934eeb0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348441 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tm7r5" event={"ID":"5946960e-4a1d-4360-ae75-7648934eeb0c","Type":"ContainerDied","Data":"5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca"} Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348490 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cbfbe2787c3b0e9fc4ebb7fda72b2f5db2fe014e9b366f10d95bd8db396a1ca" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.348562 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tm7r5" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351491 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-823f-account-create-update-pttr7" event={"ID":"f01e6326-2d83-4889-9b7a-f45b9f6f3063","Type":"ContainerDied","Data":"90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff"} Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351549 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-823f-account-create-update-pttr7" Jan 30 17:52:27 crc kubenswrapper[4766]: I0130 17:52:27.351556 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90bb896a333417b0b7e62647ee38db8e1abbc7f45f83d7cd560326df62ac92ff" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186057 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:29 crc kubenswrapper[4766]: E0130 17:52:29.186761 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186779 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: E0130 17:52:29.186816 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.186824 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187005 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" containerName="mariadb-database-create" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187024 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" containerName="mariadb-account-create-update" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.187622 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.193323 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.193487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.203468 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276436 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276492 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.276811 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378084 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378194 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378267 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.378308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod 
\"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.385164 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.386436 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.392201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.412124 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"glance-db-sync-ngkz2\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:29 crc kubenswrapper[4766]: I0130 17:52:29.504130 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:30 crc kubenswrapper[4766]: I0130 17:52:30.061684 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 17:52:30 crc kubenswrapper[4766]: I0130 17:52:30.375538 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerStarted","Data":"0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d"} Jan 30 17:52:31 crc kubenswrapper[4766]: I0130 17:52:31.393049 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerStarted","Data":"c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1"} Jan 30 17:52:31 crc kubenswrapper[4766]: I0130 17:52:31.411090 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ngkz2" podStartSLOduration=2.411067846 podStartE2EDuration="2.411067846s" podCreationTimestamp="2026-01-30 17:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:31.40758135 +0000 UTC m=+5406.045538706" watchObservedRunningTime="2026-01-30 17:52:31.411067846 +0000 UTC m=+5406.049025192" Jan 30 17:52:34 crc kubenswrapper[4766]: I0130 17:52:34.417484 4766 generic.go:334] "Generic (PLEG): container finished" podID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerID="c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1" exitCode=0 Jan 30 17:52:34 crc kubenswrapper[4766]: I0130 17:52:34.417565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" 
event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerDied","Data":"c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1"} Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.785876 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917613 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917697 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917809 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.917929 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") pod \"fca69b03-2748-4111-8dd8-0cc28cf328d3\" (UID: \"fca69b03-2748-4111-8dd8-0cc28cf328d3\") " Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.923411 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7" (OuterVolumeSpecName: "kube-api-access-xhlz7") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "kube-api-access-xhlz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.923684 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.943519 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:35 crc kubenswrapper[4766]: I0130 17:52:35.967163 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data" (OuterVolumeSpecName: "config-data") pod "fca69b03-2748-4111-8dd8-0cc28cf328d3" (UID: "fca69b03-2748-4111-8dd8-0cc28cf328d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020365 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020416 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020434 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca69b03-2748-4111-8dd8-0cc28cf328d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.020453 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhlz7\" (UniqueName: \"kubernetes.io/projected/fca69b03-2748-4111-8dd8-0cc28cf328d3-kube-api-access-xhlz7\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436113 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ngkz2" event={"ID":"fca69b03-2748-4111-8dd8-0cc28cf328d3","Type":"ContainerDied","Data":"0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d"} Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436161 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc6de5d813b15fbe8b2b6ce02d0d20c213af5cf7a77ca5fa196e374fca2b94d" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.436278 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ngkz2" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704060 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: E0130 17:52:36.704432 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704696 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.704882 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" containerName="glance-db-sync" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.705804 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.709436 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.710003 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.710615 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.712060 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.742727 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834643 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834702 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.834737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835321 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835364 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.835579 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.837757 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.846903 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.915774 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.921997 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.924621 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938233 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938300 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938374 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938443 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938514 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " 
pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938549 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938597 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938676 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938753 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.938777 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.939501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.942913 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.945994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.946088 
4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.947457 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.948407 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.949170 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:36 crc kubenswrapper[4766]: I0130 17:52:36.956971 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"glance-default-external-api-0\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.031317 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039817 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039867 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039890 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.039913 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrln\" (UniqueName: 
\"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040205 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040330 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040421 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040467 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040534 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040926 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.040977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.041259 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.041962 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.060636 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"dnsmasq-dns-548c78df-gwvnq\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142819 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142842 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142888 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.142944 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.143203 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.143317 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.151535 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.153354 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.156629 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.162781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.164658 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.166752 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.235813 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.618876 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.683494 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:52:37 crc kubenswrapper[4766]: W0130 17:52:37.689293 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf59ac31c_2444_4acf_b7a1_d4bce77181bf.slice/crio-1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d WatchSource:0}: Error finding container 1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d: Status 404 returned error can't find the container with id 1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.796641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:37 crc kubenswrapper[4766]: I0130 17:52:37.925985 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:38 crc kubenswrapper[4766]: W0130 17:52:38.001848 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bbeed9_9ddf_41e7_b48f_d56bb0f18cf7.slice/crio-01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970 WatchSource:0}: Error finding container 01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970: Status 404 returned error can't find the container with id 01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970 Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.460930 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.460974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"ce16efda73744e8835d19632655d30fbc343d2c46facf078ece46d87dcbd8fe6"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465329 4766 generic.go:334] "Generic (PLEG): container finished" podID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16" exitCode=0 Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465407 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.465436 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerStarted","Data":"1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d"} Jan 30 17:52:38 crc kubenswrapper[4766]: I0130 17:52:38.467329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.477341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.477690 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerStarted","Data":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478828 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerStarted","Data":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478912 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" containerID="cri-o://1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" gracePeriod=30 Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.478887 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" containerID="cri-o://da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" gracePeriod=30 Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.482166 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerStarted","Data":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"} Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.483055 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.496569 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.496549936 podStartE2EDuration="3.496549936s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.495938749 +0000 UTC m=+5414.133896115" watchObservedRunningTime="2026-01-30 17:52:39.496549936 +0000 UTC m=+5414.134507282" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.519334 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podStartSLOduration=3.5193150539999998 podStartE2EDuration="3.519315054s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.515357147 +0000 UTC m=+5414.153314503" watchObservedRunningTime="2026-01-30 17:52:39.519315054 +0000 UTC m=+5414.157272400" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.541094 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.541080596 podStartE2EDuration="3.541080596s" podCreationTimestamp="2026-01-30 17:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:39.54012969 +0000 UTC m=+5414.178087046" watchObservedRunningTime="2026-01-30 17:52:39.541080596 +0000 UTC m=+5414.179037942" Jan 30 17:52:39 crc kubenswrapper[4766]: I0130 17:52:39.669796 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.170437 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229623 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229791 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.229933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230002 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230025 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") pod \"cc97dcd2-d933-4049-b658-f84b0a58dceb\" (UID: \"cc97dcd2-d933-4049-b658-f84b0a58dceb\") " Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.230969 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs" (OuterVolumeSpecName: "logs") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.237534 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph" (OuterVolumeSpecName: "ceph") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.243826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt" (OuterVolumeSpecName: "kube-api-access-dpxlt") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "kube-api-access-dpxlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.243976 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts" (OuterVolumeSpecName: "scripts") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.255914 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.280059 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data" (OuterVolumeSpecName: "config-data") pod "cc97dcd2-d933-4049-b658-f84b0a58dceb" (UID: "cc97dcd2-d933-4049-b658-f84b0a58dceb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.332742 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333028 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333138 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333247 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc97dcd2-d933-4049-b658-f84b0a58dceb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333334 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333492 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpxlt\" (UniqueName: \"kubernetes.io/projected/cc97dcd2-d933-4049-b658-f84b0a58dceb-kube-api-access-dpxlt\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.333594 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc97dcd2-d933-4049-b658-f84b0a58dceb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491875 4766 generic.go:334] "Generic (PLEG): container finished" podID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" exitCode=0 Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.492913 4766 generic.go:334] "Generic (PLEG): container finished" podID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" exitCode=143 Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491940 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.491918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493108 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc97dcd2-d933-4049-b658-f84b0a58dceb","Type":"ContainerDied","Data":"ce16efda73744e8835d19632655d30fbc343d2c46facf078ece46d87dcbd8fe6"} Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.493200 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.531078 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.542645 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.543761 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569520 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.569964 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569977 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.569992 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.569998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.570165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-httpd" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.570214 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" containerName="glance-log" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.571224 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.574583 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.578800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.588927 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.594505 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.594589 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} err="failed to get container status \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.594631 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: E0130 17:52:40.595125 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595196 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} err="failed to get container status \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595237 4766 scope.go:117] "RemoveContainer" containerID="1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726"} err="failed to get container status \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": rpc error: code = NotFound desc = could not find container \"1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726\": container with ID 
starting with 1a4a00f692eced11fffc033594eb2387dc3616d4a948ff56de95719e2d4e4726 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595591 4766 scope.go:117] "RemoveContainer" containerID="da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.595792 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841"} err="failed to get container status \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": rpc error: code = NotFound desc = could not find container \"da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841\": container with ID starting with da72c68311755ea25dda758df314f72c2d8b6fed065976f99b765262fd636841 not found: ID does not exist" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638616 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638691 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638749 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638884 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.638938 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.740947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741008 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741573 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.741642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.742495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.743103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746356 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746612 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.746757 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.747981 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.759906 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"glance-default-external-api-0\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " pod="openstack/glance-default-external-api-0" Jan 30 17:52:40 crc kubenswrapper[4766]: I0130 17:52:40.909571 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.432908 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:52:41 crc kubenswrapper[4766]: W0130 17:52:41.433577 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7946b0e6_2de2_4708_ac83_ce1ad398d8a5.slice/crio-d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b WatchSource:0}: Error finding container d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b: Status 404 returned error can't find the container with id d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.523231 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b"} Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.524503 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" containerID="cri-o://0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" gracePeriod=30 Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.524629 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" containerID="cri-o://ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" gracePeriod=30 Jan 30 17:52:41 crc kubenswrapper[4766]: I0130 17:52:41.992147 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.053649 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc97dcd2-d933-4049-b658-f84b0a58dceb" path="/var/lib/kubelet/pods/cc97dcd2-d933-4049-b658-f84b0a58dceb/volumes" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.064888 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065011 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065080 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065115 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065728 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065841 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") pod \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\" (UID: \"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7\") " Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.065915 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.066258 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs" (OuterVolumeSpecName: "logs") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.067902 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.067923 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.069599 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph" (OuterVolumeSpecName: "ceph") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.070087 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts" (OuterVolumeSpecName: "scripts") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.075426 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k" (OuterVolumeSpecName: "kube-api-access-gwb5k") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "kube-api-access-gwb5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.093353 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.128108 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data" (OuterVolumeSpecName: "config-data") pod "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" (UID: "75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.169973 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170013 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170023 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170032 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.170042 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwb5k\" (UniqueName: \"kubernetes.io/projected/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7-kube-api-access-gwb5k\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.534313 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.534369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerStarted","Data":"cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535929 4766 generic.go:334] "Generic (PLEG): container finished" podID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" exitCode=0 Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535964 4766 generic.go:334] "Generic (PLEG): container finished" podID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" exitCode=143 Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.535986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536013 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536027 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7","Type":"ContainerDied","Data":"01ef7cb908a67430e0629dbdba0634f3d450b321b3a44e9f49460e7da28dd970"} Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536047 4766 scope.go:117] "RemoveContainer" 
containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.536436 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.575687 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.575657271 podStartE2EDuration="2.575657271s" podCreationTimestamp="2026-01-30 17:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:42.559844111 +0000 UTC m=+5417.197801467" watchObservedRunningTime="2026-01-30 17:52:42.575657271 +0000 UTC m=+5417.213614617" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.589479 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.607257 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.621108 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.627699 4766 scope.go:117] "RemoveContainer" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.628964 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629000 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} err="failed to get container status \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629026 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.629426 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629444 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} err="failed to get container status 
\"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629457 4766 scope.go:117] "RemoveContainer" containerID="ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629677 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852"} err="failed to get container status \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": rpc error: code = NotFound desc = could not find container \"ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852\": container with ID starting with ed451d7a4e20841ebf8afa65c6b3d96b76fbb339253b21e38aa17b512e3a5852 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.629690 4766 scope.go:117] "RemoveContainer" containerID="0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.630686 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23"} err="failed to get container status \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": rpc error: code = NotFound desc = could not find container \"0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23\": container with ID starting with 0e49f4f829acfc6a2cd703577ac126098cca854f5401088b0363fe918a8b7f23 not found: ID does not exist" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.636630 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.637105 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637123 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: E0130 17:52:42.637165 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637171 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637355 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-httpd" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.637380 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" containerName="glance-log" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.638517 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.642643 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.651302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.686807 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.686964 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.687124 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.687243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690127 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.690168 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791632 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791747 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791818 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.791855 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.792571 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.798954 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.801699 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: 
I0130 17:52:42.808128 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.811701 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.815710 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.832958 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"glance-default-internal-api-0\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:52:42 crc kubenswrapper[4766]: I0130 17:52:42.999206 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:43 crc kubenswrapper[4766]: I0130 17:52:43.527973 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.054755 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7" path="/var/lib/kubelet/pods/75bbeed9-9ddf-41e7-b48f-d56bb0f18cf7/volumes" Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563773 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.563827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerStarted","Data":"a636aed8819668fe27e888c223782c929538ea199ee28b047c4b35c7334f0992"} Jan 30 17:52:44 crc kubenswrapper[4766]: I0130 17:52:44.590925 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.590902241 podStartE2EDuration="2.590902241s" podCreationTimestamp="2026-01-30 17:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:52:44.582357169 +0000 UTC m=+5419.220314525" watchObservedRunningTime="2026-01-30 17:52:44.590902241 +0000 
UTC m=+5419.228859587" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.158394 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.224624 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.224906 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" containerID="cri-o://90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" gracePeriod=10 Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.591575 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerID="90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" exitCode=0 Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.591664 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4"} Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.686860 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.801961 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802050 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802101 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802194 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.802234 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") pod \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\" (UID: \"3a7525bc-5e61-4580-b6ec-03ee13b7eefe\") " Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.817563 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp" (OuterVolumeSpecName: "kube-api-access-5fvkp") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" 
(UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "kube-api-access-5fvkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.845369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.845399 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.846640 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config" (OuterVolumeSpecName: "config") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.856830 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a7525bc-5e61-4580-b6ec-03ee13b7eefe" (UID: "3a7525bc-5e61-4580-b6ec-03ee13b7eefe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904662 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904697 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvkp\" (UniqueName: \"kubernetes.io/projected/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-kube-api-access-5fvkp\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904708 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904718 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:47 crc kubenswrapper[4766]: I0130 17:52:47.904726 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7525bc-5e61-4580-b6ec-03ee13b7eefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606651 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" event={"ID":"3a7525bc-5e61-4580-b6ec-03ee13b7eefe","Type":"ContainerDied","Data":"b79a964de471e1d1b203d59d894a14ac3d8e1bae897a81215e4af1ded098934b"} Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606716 4766 scope.go:117] "RemoveContainer" containerID="90a0a5811dcd0404a316f42d00527453af74b9dc4dd4a141b0ba0cd2e2cf54c4" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.606795 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.641379 4766 scope.go:117] "RemoveContainer" containerID="55c241c1b1860be383ecda1eec34453e72d6dcb7f7ddf745097a4fb7e9ad2729" Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.645808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:48 crc kubenswrapper[4766]: I0130 17:52:48.656293 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dc5cbf9f7-zkw64"] Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.052889 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" path="/var/lib/kubelet/pods/3a7525bc-5e61-4580-b6ec-03ee13b7eefe/volumes" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.909924 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.910328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.949382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:52:50 crc kubenswrapper[4766]: I0130 17:52:50.955249 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:52:51 crc kubenswrapper[4766]: I0130 17:52:51.637855 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 17:52:51 crc kubenswrapper[4766]: I0130 17:52:51.638007 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 17:52:52 crc kubenswrapper[4766]: I0130 17:52:52.525345 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7dc5cbf9f7-zkw64" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.30:5353: i/o timeout" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.000127 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.000580 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.036193 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.052781 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.610262 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.633197 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 17:52:53.658815 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:53 crc kubenswrapper[4766]: I0130 
17:52:53.658848 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:55 crc kubenswrapper[4766]: I0130 17:52:55.591326 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:52:55 crc kubenswrapper[4766]: I0130 17:52:55.600932 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.026213 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: E0130 17:53:01.027231 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027244 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: E0130 17:53:01.027281 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="init" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027290 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="init" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.027522 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7525bc-5e61-4580-b6ec-03ee13b7eefe" containerName="dnsmasq-dns" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.028349 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.030619 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.032329 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.033852 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.046818 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.057418 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.115893 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.115994 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.116322 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.116489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220325 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220394 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.220495 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: 
\"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.221092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.221344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.245089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"placement-db-create-5n9p6\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.245111 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"placement-4b67-account-create-update-85sd5\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.365029 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.376618 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:01 crc kubenswrapper[4766]: W0130 17:53:01.807733 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ade9e5_b989_431e_995d_1dec1432ed75.slice/crio-96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3 WatchSource:0}: Error finding container 96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3: Status 404 returned error can't find the container with id 96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3 Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.816664 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 17:53:01 crc kubenswrapper[4766]: I0130 17:53:01.889087 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 17:53:01 crc kubenswrapper[4766]: W0130 17:53:01.890118 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb39a90f_2911_4e3f_a034_025eb6f8077d.slice/crio-c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1 WatchSource:0}: Error finding container c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1: Status 404 returned error can't find the container with id c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743070 4766 generic.go:334] "Generic (PLEG): container finished" podID="03ade9e5-b989-431e-995d-1dec1432ed75" containerID="cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db" exitCode=0 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerDied","Data":"cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.743788 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerStarted","Data":"96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746495 4766 generic.go:334] "Generic (PLEG): container finished" podID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerID="8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a" exitCode=0 Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerDied","Data":"8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a"} Jan 30 17:53:02 crc kubenswrapper[4766]: I0130 17:53:02.746579 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerStarted","Data":"c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.040565 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.137837 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172306 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") pod \"cb39a90f-2911-4e3f-a034-025eb6f8077d\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") pod \"cb39a90f-2911-4e3f-a034-025eb6f8077d\" (UID: \"cb39a90f-2911-4e3f-a034-025eb6f8077d\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.172780 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb39a90f-2911-4e3f-a034-025eb6f8077d" (UID: "cb39a90f-2911-4e3f-a034-025eb6f8077d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.173361 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb39a90f-2911-4e3f-a034-025eb6f8077d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.177196 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw" (OuterVolumeSpecName: "kube-api-access-lhwvw") pod "cb39a90f-2911-4e3f-a034-025eb6f8077d" (UID: "cb39a90f-2911-4e3f-a034-025eb6f8077d"). InnerVolumeSpecName "kube-api-access-lhwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274150 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") pod \"03ade9e5-b989-431e-995d-1dec1432ed75\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274241 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") pod \"03ade9e5-b989-431e-995d-1dec1432ed75\" (UID: \"03ade9e5-b989-431e-995d-1dec1432ed75\") " Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.274557 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03ade9e5-b989-431e-995d-1dec1432ed75" (UID: "03ade9e5-b989-431e-995d-1dec1432ed75"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.275023 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhwvw\" (UniqueName: \"kubernetes.io/projected/cb39a90f-2911-4e3f-a034-025eb6f8077d-kube-api-access-lhwvw\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.275040 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ade9e5-b989-431e-995d-1dec1432ed75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.276788 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts" (OuterVolumeSpecName: "kube-api-access-pt5ts") pod "03ade9e5-b989-431e-995d-1dec1432ed75" (UID: "03ade9e5-b989-431e-995d-1dec1432ed75"). InnerVolumeSpecName "kube-api-access-pt5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.376409 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5ts\" (UniqueName: \"kubernetes.io/projected/03ade9e5-b989-431e-995d-1dec1432ed75-kube-api-access-pt5ts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.765882 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5n9p6" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.765895 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5n9p6" event={"ID":"03ade9e5-b989-431e-995d-1dec1432ed75","Type":"ContainerDied","Data":"96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.766018 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96021fa5a614247e55e96e2213f6e0ba4a531a15e49a3c793389ef7e4bd2ada3" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767899 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4b67-account-create-update-85sd5" event={"ID":"cb39a90f-2911-4e3f-a034-025eb6f8077d","Type":"ContainerDied","Data":"c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1"} Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767920 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0a69fb13f8ec100c5fd9ca16c3e8d401a863c48bfe8a607bfa99aa36d69b6f1" Jan 30 17:53:04 crc kubenswrapper[4766]: I0130 17:53:04.767994 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4b67-account-create-update-85sd5" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.442632 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:06 crc kubenswrapper[4766]: E0130 17:53:06.444914 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445019 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: E0130 17:53:06.445112 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445190 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445405 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" containerName="mariadb-database-create" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.445478 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" containerName="mariadb-account-create-update" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.450755 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.460580 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.481119 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.482273 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.484820 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bvph7" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.484987 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.488593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.507642 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.521944 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.521999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522027 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522140 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.522188 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623431 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623528 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 
30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623572 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623638 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623691 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623792 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.623821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.624747 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.625286 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.625812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.627116 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.647126 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"dnsmasq-dns-64fdd96cfc-xc6hw\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725367 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725462 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.725968 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.726031 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.726061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod 
\"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.728723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.728792 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.729614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.741775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"placement-db-sync-hn8dr\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.773763 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:06 crc kubenswrapper[4766]: I0130 17:53:06.799774 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.230870 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.339131 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 17:53:07 crc kubenswrapper[4766]: W0130 17:53:07.342387 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d89feb8_9495_4c8a_a424_37720df352bb.slice/crio-8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6 WatchSource:0}: Error finding container 8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6: Status 404 returned error can't find the container with id 8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6 Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.792760 4766 generic.go:334] "Generic (PLEG): container finished" podID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerID="97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939" exitCode=0 Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.792822 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.793470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerStarted","Data":"62402daa4d1e00e414a6153806e7a4ebba06101c39ecd01fd579e17d1df427fb"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.797446 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerStarted","Data":"5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.797478 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerStarted","Data":"8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6"} Jan 30 17:53:07 crc kubenswrapper[4766]: I0130 17:53:07.843680 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hn8dr" podStartSLOduration=1.843661135 podStartE2EDuration="1.843661135s" podCreationTimestamp="2026-01-30 17:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:07.839606614 +0000 UTC m=+5442.477563980" watchObservedRunningTime="2026-01-30 17:53:07.843661135 +0000 UTC m=+5442.481618481" Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.806361 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerStarted","Data":"5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a"} Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.807066 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:08 crc kubenswrapper[4766]: I0130 17:53:08.830134 
4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podStartSLOduration=2.830112375 podStartE2EDuration="2.830112375s" podCreationTimestamp="2026-01-30 17:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:08.821366177 +0000 UTC m=+5443.459323523" watchObservedRunningTime="2026-01-30 17:53:08.830112375 +0000 UTC m=+5443.468069721" Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.045719 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.045829 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.817066 4766 generic.go:334] "Generic (PLEG): container finished" podID="2d89feb8-9495-4c8a-a424-37720df352bb" containerID="5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2" exitCode=0 Jan 30 17:53:09 crc kubenswrapper[4766]: I0130 17:53:09.817128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerDied","Data":"5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2"} Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.190536 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318471 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318644 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.318767 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") pod \"2d89feb8-9495-4c8a-a424-37720df352bb\" (UID: \"2d89feb8-9495-4c8a-a424-37720df352bb\") " Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.319272 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs" (OuterVolumeSpecName: "logs") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.324493 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h" (OuterVolumeSpecName: "kube-api-access-62m8h") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "kube-api-access-62m8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.337475 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts" (OuterVolumeSpecName: "scripts") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.343646 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.345985 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data" (OuterVolumeSpecName: "config-data") pod "2d89feb8-9495-4c8a-a424-37720df352bb" (UID: "2d89feb8-9495-4c8a-a424-37720df352bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421485 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m8h\" (UniqueName: \"kubernetes.io/projected/2d89feb8-9495-4c8a-a424-37720df352bb-kube-api-access-62m8h\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421526 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421541 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421553 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d89feb8-9495-4c8a-a424-37720df352bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.421567 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d89feb8-9495-4c8a-a424-37720df352bb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876164 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn8dr" event={"ID":"2d89feb8-9495-4c8a-a424-37720df352bb","Type":"ContainerDied","Data":"8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6"} Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876225 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hn8dr" Jan 30 17:53:11 crc kubenswrapper[4766]: I0130 17:53:11.876234 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8163ecab3c0c0c830276d748dc6bd3651231e4bd0ddebf23e1fc627650e600f6" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.300560 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"] Jan 30 17:53:12 crc kubenswrapper[4766]: E0130 17:53:12.300986 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.300998 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.301165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" containerName="placement-db-sync" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.303221 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306109 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306267 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bvph7" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.306409 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.317495 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"] Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450095 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450207 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450451 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450646 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.450688 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.552643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553065 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 
17:53:12.553123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553207 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553233 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.553755 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234231ef-1ed0-40ff-a4a8-0d9f533d39de-logs\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.557232 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-combined-ca-bundle\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.557408 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-config-data\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.558651 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234231ef-1ed0-40ff-a4a8-0d9f533d39de-scripts\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.573769 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5t8\" (UniqueName: \"kubernetes.io/projected/234231ef-1ed0-40ff-a4a8-0d9f533d39de-kube-api-access-st5t8\") pod \"placement-6cf79c7456-bp9jt\" (UID: \"234231ef-1ed0-40ff-a4a8-0d9f533d39de\") " pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:12 crc kubenswrapper[4766]: I0130 17:53:12.621336 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.081630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cf79c7456-bp9jt"] Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"901b04b4e1ac0fafdff2182ed215c2255dca7b47f2ab1f0665b6d4476dfdb4c9"} Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895814 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"3cdf4ccf7a30494a80084afaed49ab019233f05b0a510591d6225b7293978583"} Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895827 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cf79c7456-bp9jt" event={"ID":"234231ef-1ed0-40ff-a4a8-0d9f533d39de","Type":"ContainerStarted","Data":"239fd21e97b863b10dbab23654bad42aff7e4b17b3c9e3a5f993df36733b5427"} Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.895843 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:13 crc kubenswrapper[4766]: I0130 17:53:13.929796 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cf79c7456-bp9jt" podStartSLOduration=1.929772105 podStartE2EDuration="1.929772105s" podCreationTimestamp="2026-01-30 17:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:53:13.918037956 +0000 UTC m=+5448.555995302" watchObservedRunningTime="2026-01-30 17:53:13.929772105 +0000 UTC m=+5448.567729451" Jan 30 17:53:14 crc kubenswrapper[4766]: I0130 17:53:14.904933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.775690 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.857976 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:53:16 crc kubenswrapper[4766]: I0130 17:53:16.858241 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" containerID="cri-o://12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" gracePeriod=10 Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.328557 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441491 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441617 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441719 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441877 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.441905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") pod \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\" (UID: \"f59ac31c-2444-4acf-b7a1-d4bce77181bf\") " Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.447802 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln" (OuterVolumeSpecName: "kube-api-access-qcrln") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "kube-api-access-qcrln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.486574 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config" (OuterVolumeSpecName: "config") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.494022 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.497611 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.498062 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f59ac31c-2444-4acf-b7a1-d4bce77181bf" (UID: "f59ac31c-2444-4acf-b7a1-d4bce77181bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543655 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543688 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543697 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543763 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcrln\" (UniqueName: \"kubernetes.io/projected/f59ac31c-2444-4acf-b7a1-d4bce77181bf-kube-api-access-qcrln\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.543774 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59ac31c-2444-4acf-b7a1-d4bce77181bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932762 4766 generic.go:334] "Generic (PLEG): container finished" podID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" exitCode=0 Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932822 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548c78df-gwvnq" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932832 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"} Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932943 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548c78df-gwvnq" event={"ID":"f59ac31c-2444-4acf-b7a1-d4bce77181bf","Type":"ContainerDied","Data":"1f2e26cf3fa088fc28831e74273a48702602b1cf187d0ca6caaa1a82f45b271d"} Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.932978 4766 scope.go:117] "RemoveContainer" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.959608 4766 scope.go:117] "RemoveContainer" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16" Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.983074 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:53:17 crc kubenswrapper[4766]: I0130 17:53:17.991104 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548c78df-gwvnq"] Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.049281 4766 scope.go:117] "RemoveContainer" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" Jan 30 17:53:18 crc kubenswrapper[4766]: E0130 17:53:18.050270 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": container with ID starting with 12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c not found: ID does not exist" containerID="12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c" Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.050322 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c"} err="failed to get container status \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": rpc error: code = NotFound desc = could not find container \"12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c\": container with ID starting with 12b98c744346fb1a43ddf1ff58ce20a8705aecf44b159eaae4645c7fd037788c not found: ID does not exist" Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.050356 4766 scope.go:117] "RemoveContainer" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16" Jan 30 17:53:18 crc kubenswrapper[4766]: E0130 17:53:18.051068 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": container with ID starting with 7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16 not found: ID does not exist" containerID="7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16" Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.051103 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16"} err="failed to get container status 
\"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": rpc error: code = NotFound desc = could not find container \"7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16\": container with ID starting with 7b3ff31ec71dbb0af27d4b0fdc9d8f231f39936bfca91b8179caf49331fe1f16 not found: ID does not exist" Jan 30 17:53:18 crc kubenswrapper[4766]: I0130 17:53:18.054146 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" path="/var/lib/kubelet/pods/f59ac31c-2444-4acf-b7a1-d4bce77181bf/volumes" Jan 30 17:53:22 crc kubenswrapper[4766]: I0130 17:53:22.157617 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-548c78df-gwvnq" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.36:5353: i/o timeout" Jan 30 17:53:31 crc kubenswrapper[4766]: I0130 17:53:31.813238 4766 scope.go:117] "RemoveContainer" containerID="b005c60a4add2d8581404792f9ce09c8f2b90990814a350d305efe960ab72a39" Jan 30 17:53:39 crc kubenswrapper[4766]: I0130 17:53:39.045478 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:53:39 crc kubenswrapper[4766]: I0130 17:53:39.046029 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:53:43 crc kubenswrapper[4766]: I0130 17:53:43.652601 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:53:44 crc kubenswrapper[4766]: I0130 17:53:44.734600 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cf79c7456-bp9jt" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.771467 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 17:54:07 crc kubenswrapper[4766]: E0130 17:54:07.772245 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772259 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: E0130 17:54:07.772285 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="init" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772293 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="init" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.772444 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59ac31c-2444-4acf-b7a1-d4bce77181bf" containerName="dnsmasq-dns" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.773044 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.838263 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.877126 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.878663 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.891153 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.943223 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.943452 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.972958 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.974194 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.976454 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 17:54:07 crc kubenswrapper[4766]: I0130 17:54:07.982289 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052290 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052744 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.052923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.054761 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.069163 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.071256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.078644 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.084452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"nova-api-db-create-dwwb9\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.145248 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154631 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154744 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154775 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.154857 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.155894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.175632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"nova-cell0-db-create-hsbm5\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.184650 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.186426 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.191487 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.193742 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.202764 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.257936 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258113 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.258160 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.259386 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.290033 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"nova-api-4207-account-create-update-5677m\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.291321 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.344120 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.345805 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.348565 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359422 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359551 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359588 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.359609 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.365596 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.376909 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.379107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"nova-cell1-db-create-hkg9q\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.427199 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461054 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461460 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461548 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.461596 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.462485 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.480988 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"nova-cell0-1549-account-create-update-qksfj\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.564214 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.564601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: 
I0130 17:54:08.565641 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.582065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"nova-cell1-4379-account-create-update-xxk7g\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.658239 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.677661 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.692034 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.821854 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.908463 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 17:54:08 crc kubenswrapper[4766]: W0130 17:54:08.916150 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230985b1_39a5_440c_b67a_97bed8481bd6.slice/crio-4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8 WatchSource:0}: Error finding container 4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8: Status 404 returned error can't find the container with id 4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8 Jan 30 17:54:08 crc kubenswrapper[4766]: I0130 17:54:08.986249 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047421 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047488 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.047547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.048595 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.048653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78" gracePeriod=600 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.060170 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 17:54:09 crc kubenswrapper[4766]: W0130 17:54:09.072615 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2114339_89f3_4232_94e1_d4323d23978b.slice/crio-952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa WatchSource:0}: Error finding container 952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa: Status 404 returned error can't find the container with id 952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.182645 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 17:54:09 crc kubenswrapper[4766]: W0130 17:54:09.202805 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e9cfc2_7b7d_47eb_aece_ed9fe716594a.slice/crio-69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290 WatchSource:0}: Error finding container 69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290: Status 404 returned error can't find the container with id 69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.465053 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerStarted","Data":"952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470580 4766 generic.go:334] "Generic (PLEG): container finished" podID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerID="3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerDied","Data":"3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.470666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerStarted","Data":"8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.474802 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78" 
exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.474919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.475032 4766 scope.go:117] "RemoveContainer" containerID="7fc5828a50a187fe8ccf98b89913744accb72a8e7e151fd7516b11ddfbd0849c" Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.476645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerStarted","Data":"c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481522 4766 generic.go:334] "Generic (PLEG): container finished" podID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerID="c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481616 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerDied","Data":"c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.481655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerStarted","Data":"202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486257 4766 generic.go:334] "Generic (PLEG): container finished" podID="230985b1-39a5-440c-b67a-97bed8481bd6" containerID="afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638" exitCode=0 Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerDied","Data":"afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.486592 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerStarted","Data":"4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8"} Jan 30 17:54:09 crc kubenswrapper[4766]: I0130 17:54:09.489355 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerStarted","Data":"69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.502591 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.506798 4766 generic.go:334] "Generic (PLEG): container finished" podID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerID="84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195" exitCode=0 Jan 
30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.506919 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerDied","Data":"84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.509941 4766 generic.go:334] "Generic (PLEG): container finished" podID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerID="4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66" exitCode=0 Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.510052 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerDied","Data":"4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.512551 4766 generic.go:334] "Generic (PLEG): container finished" podID="e2114339-89f3-4232-94e1-d4323d23978b" containerID="69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61" exitCode=0 Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.512619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerDied","Data":"69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61"} Jan 30 17:54:10 crc kubenswrapper[4766]: I0130 17:54:10.998856 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.005906 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.012498 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.115890 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") pod \"230985b1-39a5-440c-b67a-97bed8481bd6\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116320 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") pod \"230985b1-39a5-440c-b67a-97bed8481bd6\" (UID: \"230985b1-39a5-440c-b67a-97bed8481bd6\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116461 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") pod \"03cd48e2-831c-4067-ae82-6aa11c3ed219\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") pod \"eda85bd2-cef5-4dba-b322-a9f16aced872\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") pod \"03cd48e2-831c-4067-ae82-6aa11c3ed219\" (UID: \"03cd48e2-831c-4067-ae82-6aa11c3ed219\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.116632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") pod \"eda85bd2-cef5-4dba-b322-a9f16aced872\" (UID: \"eda85bd2-cef5-4dba-b322-a9f16aced872\") " Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.117860 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03cd48e2-831c-4067-ae82-6aa11c3ed219" (UID: "03cd48e2-831c-4067-ae82-6aa11c3ed219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.117907 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eda85bd2-cef5-4dba-b322-a9f16aced872" (UID: "eda85bd2-cef5-4dba-b322-a9f16aced872"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.118280 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "230985b1-39a5-440c-b67a-97bed8481bd6" (UID: "230985b1-39a5-440c-b67a-97bed8481bd6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.125395 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb" (OuterVolumeSpecName: "kube-api-access-rkgdb") pod "230985b1-39a5-440c-b67a-97bed8481bd6" (UID: "230985b1-39a5-440c-b67a-97bed8481bd6"). InnerVolumeSpecName "kube-api-access-rkgdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.131342 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp" (OuterVolumeSpecName: "kube-api-access-v74xp") pod "03cd48e2-831c-4067-ae82-6aa11c3ed219" (UID: "03cd48e2-831c-4067-ae82-6aa11c3ed219"). InnerVolumeSpecName "kube-api-access-v74xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.131679 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj" (OuterVolumeSpecName: "kube-api-access-hs9bj") pod "eda85bd2-cef5-4dba-b322-a9f16aced872" (UID: "eda85bd2-cef5-4dba-b322-a9f16aced872"). InnerVolumeSpecName "kube-api-access-hs9bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220023 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03cd48e2-831c-4067-ae82-6aa11c3ed219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220075 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs9bj\" (UniqueName: \"kubernetes.io/projected/eda85bd2-cef5-4dba-b322-a9f16aced872-kube-api-access-hs9bj\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220092 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v74xp\" (UniqueName: \"kubernetes.io/projected/03cd48e2-831c-4067-ae82-6aa11c3ed219-kube-api-access-v74xp\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220107 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda85bd2-cef5-4dba-b322-a9f16aced872-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220121 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkgdb\" (UniqueName: \"kubernetes.io/projected/230985b1-39a5-440c-b67a-97bed8481bd6-kube-api-access-rkgdb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.220135 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230985b1-39a5-440c-b67a-97bed8481bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523404 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwwb9" event={"ID":"eda85bd2-cef5-4dba-b322-a9f16aced872","Type":"ContainerDied","Data":"8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523464 4766 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8ee0f1d156658e15015a8f4ede4d9bf7567fcbe3196666eeaeba39144fe9c7a6" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.523488 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwwb9" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.527879 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hsbm5" event={"ID":"03cd48e2-831c-4067-ae82-6aa11c3ed219","Type":"ContainerDied","Data":"202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.527935 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202bc9e4e5f83742e250c3aba46345f39d24bdf37e9b3af3dd0ed7e6f1d63c64" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.528009 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hsbm5" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.531580 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4207-account-create-update-5677m" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.533338 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4207-account-create-update-5677m" event={"ID":"230985b1-39a5-440c-b67a-97bed8481bd6","Type":"ContainerDied","Data":"4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8"} Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.533387 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df8453040f71d801da8b8ca0a2ae0dde4dd8c9c7fe86c544648e44c452883c8" Jan 30 17:54:11 crc kubenswrapper[4766]: I0130 17:54:11.927810 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.029416 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036011 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") pod \"caa501cc-1f23-4a0c-b845-31c9ae218be6\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036322 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") pod \"caa501cc-1f23-4a0c-b845-31c9ae218be6\" (UID: \"caa501cc-1f23-4a0c-b845-31c9ae218be6\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.036903 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caa501cc-1f23-4a0c-b845-31c9ae218be6" (UID: "caa501cc-1f23-4a0c-b845-31c9ae218be6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.061525 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd" (OuterVolumeSpecName: "kube-api-access-k27qd") pod "caa501cc-1f23-4a0c-b845-31c9ae218be6" (UID: "caa501cc-1f23-4a0c-b845-31c9ae218be6"). InnerVolumeSpecName "kube-api-access-k27qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.062223 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caa501cc-1f23-4a0c-b845-31c9ae218be6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.088353 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165195 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") pod \"e2114339-89f3-4232-94e1-d4323d23978b\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") pod \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165385 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") pod \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\" (UID: \"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.165447 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") pod \"e2114339-89f3-4232-94e1-d4323d23978b\" (UID: \"e2114339-89f3-4232-94e1-d4323d23978b\") " Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.166140 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k27qd\" (UniqueName: \"kubernetes.io/projected/caa501cc-1f23-4a0c-b845-31c9ae218be6-kube-api-access-k27qd\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.167152 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2114339-89f3-4232-94e1-d4323d23978b" (UID: "e2114339-89f3-4232-94e1-d4323d23978b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.167279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" (UID: "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.169132 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758" (OuterVolumeSpecName: "kube-api-access-tq758") pod "e2114339-89f3-4232-94e1-d4323d23978b" (UID: "e2114339-89f3-4232-94e1-d4323d23978b"). InnerVolumeSpecName "kube-api-access-tq758". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.171208 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4" (OuterVolumeSpecName: "kube-api-access-4szf4") pod "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" (UID: "a8e9cfc2-7b7d-47eb-aece-ed9fe716594a"). InnerVolumeSpecName "kube-api-access-4szf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.268949 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2114339-89f3-4232-94e1-d4323d23978b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.268996 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4szf4\" (UniqueName: \"kubernetes.io/projected/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-kube-api-access-4szf4\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.269027 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.269040 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq758\" (UniqueName: \"kubernetes.io/projected/e2114339-89f3-4232-94e1-d4323d23978b-kube-api-access-tq758\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552121 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1549-account-create-update-qksfj" event={"ID":"a8e9cfc2-7b7d-47eb-aece-ed9fe716594a","Type":"ContainerDied","Data":"69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552240 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69623ba06332d560d143325ba175dab6815a3ac4a1aeb10d6d5b6496ee8ea290" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.552148 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1549-account-create-update-qksfj" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554873 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" event={"ID":"e2114339-89f3-4232-94e1-d4323d23978b","Type":"ContainerDied","Data":"952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554942 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952cbf0601aef443296d22bf560f08f3ce7ab0143e573f7b64998299ae03d1aa" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.554952 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4379-account-create-update-xxk7g" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.557961 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hkg9q" event={"ID":"caa501cc-1f23-4a0c-b845-31c9ae218be6","Type":"ContainerDied","Data":"c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e"} Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.558019 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b789ef95f3bb31aab314d166e76c52bafd3d8c831caf0f2ec3ac9970ef8e2e" Jan 30 17:54:12 crc kubenswrapper[4766]: I0130 17:54:12.558101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hkg9q" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.480631 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481585 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481606 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481627 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481636 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481660 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481668 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481682 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481690 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481708 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481716 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: E0130 17:54:13.481730 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481737 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481918 4766 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481931 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481942 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481966 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" containerName="mariadb-database-create" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.481981 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2114339-89f3-4232-94e1-d4323d23978b" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.482003 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" containerName="mariadb-account-create-update" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.482854 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.486222 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.487491 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cbjlt" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.489974 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.509633 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592219 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592271 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.592318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.698393 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699010 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.699072 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.706375 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.707099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.720000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.725723 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"nova-cell0-conductor-db-sync-jccb8\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:13 crc kubenswrapper[4766]: I0130 17:54:13.804088 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:14 crc kubenswrapper[4766]: I0130 17:54:14.348266 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 17:54:14 crc kubenswrapper[4766]: I0130 17:54:14.597874 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerStarted","Data":"da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131"} Jan 30 17:54:15 crc kubenswrapper[4766]: I0130 17:54:15.609889 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerStarted","Data":"b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256"} Jan 30 17:54:15 crc kubenswrapper[4766]: I0130 17:54:15.633752 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jccb8" podStartSLOduration=2.63373121 podStartE2EDuration="2.63373121s" podCreationTimestamp="2026-01-30 17:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:15.628005584 +0000 UTC m=+5510.265962960" watchObservedRunningTime="2026-01-30 17:54:15.63373121 +0000 UTC m=+5510.271688576" Jan 30 17:54:22 crc kubenswrapper[4766]: I0130 17:54:22.680158 4766 generic.go:334] "Generic (PLEG): container finished" podID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerID="b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256" exitCode=0 Jan 30 17:54:22 crc kubenswrapper[4766]: I0130 17:54:22.680340 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerDied","Data":"b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256"} Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.080946 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201170 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201584 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201747 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.201781 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") pod \"b37a2812-82ad-4535-84e6-569f9b3765a6\" (UID: \"b37a2812-82ad-4535-84e6-569f9b3765a6\") " Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.207429 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts" (OuterVolumeSpecName: "scripts") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.207587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz" (OuterVolumeSpecName: "kube-api-access-v5cwz") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "kube-api-access-v5cwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.227488 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.228362 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data" (OuterVolumeSpecName: "config-data") pod "b37a2812-82ad-4535-84e6-569f9b3765a6" (UID: "b37a2812-82ad-4535-84e6-569f9b3765a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303541 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303817 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303912 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5cwz\" (UniqueName: \"kubernetes.io/projected/b37a2812-82ad-4535-84e6-569f9b3765a6-kube-api-access-v5cwz\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.303996 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37a2812-82ad-4535-84e6-569f9b3765a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724305 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jccb8" event={"ID":"b37a2812-82ad-4535-84e6-569f9b3765a6","Type":"ContainerDied","Data":"da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131"} Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724388 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da172445bfeb555287be406b3b1bc1f24619d25d1d44b2a900720a4c67714131" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.724336 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jccb8" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.808679 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:24 crc kubenswrapper[4766]: E0130 17:54:24.809299 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.809323 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.809550 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" containerName="nova-cell0-conductor-db-sync" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.810339 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.815319 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.815534 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cbjlt" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.817944 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:24 crc kubenswrapper[4766]: I0130 17:54:24.914583 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015783 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015826 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.015905 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.020360 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.021367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.042379 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"nova-cell0-conductor-0\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.136526 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.575945 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:54:25 crc kubenswrapper[4766]: W0130 17:54:25.578446 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6725384_f878_416e_832e_64ea63dc6698.slice/crio-04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2 WatchSource:0}: Error finding container 04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2: Status 404 returned error can't find the container with id 04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2 Jan 30 17:54:25 crc kubenswrapper[4766]: I0130 17:54:25.735933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerStarted","Data":"04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2"} Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.747583 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerStarted","Data":"c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa"} Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.748050 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:26 crc kubenswrapper[4766]: I0130 17:54:26.781595 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7815695099999997 podStartE2EDuration="2.78156951s" podCreationTimestamp="2026-01-30 17:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:26.773316885 +0000 UTC m=+5521.411274261" watchObservedRunningTime="2026-01-30 17:54:26.78156951 +0000 UTC m=+5521.419526886" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.163360 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.559323 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.560757 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.563542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.563826 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.568985 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.690626 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.694515 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.696738 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.702933 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722003 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.722194 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.771752 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.773106 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.776840 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.788470 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824139 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824241 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824348 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824379 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.824458 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.831469 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.835729 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.846764 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.848967 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"nova-cell0-cell-mapping-5xsrx\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.867143 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.868651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.875216 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.883787 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926281 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926310 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926346 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926372 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926436 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.926468 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.927155 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.931626 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.931994 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.932610 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.959500 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.980384 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"nova-api-0\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " pod="openstack/nova-api-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.989018 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:30 crc kubenswrapper[4766]: I0130 17:54:30.996398 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.016689 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031198 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031277 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031474 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031498 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.031820 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.032043 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.043364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.043986 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.061198 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvx58\" 
(UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"nova-scheduler-0\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.088876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.092755 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.121683 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.123545 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.130246 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.135947 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136029 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136072 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136218 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.136251 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nppkt\" (UniqueName: 
\"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.137851 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.141336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.159434 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.200787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"nova-metadata-0\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.226627 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237460 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237594 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237630 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237655 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237678 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.237729 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.243121 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.259935 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.268200 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.342898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.342972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343028 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod 
\"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343076 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.343120 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344603 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.344790 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.362018 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.381961 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"dnsmasq-dns-8c8b5f8b9-npmjq\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.387938 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.484383 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.682018 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.784087 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.804463 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerStarted","Data":"8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08"} Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.837724 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.840138 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.844846 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.845892 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.850288 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.885907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.931387 4766 scope.go:117] "RemoveContainer" containerID="d83ad14fd8f4b675ceb3460a2bf958a20357e50f2d888a5402edc7fdebd9aa08" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.960862 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962339 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962410 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962434 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.962577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:31 crc kubenswrapper[4766]: W0130 17:54:31.967502 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f688a02_a337_43d9_9cc8_ca5d7ba19898.slice/crio-9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b WatchSource:0}: Error finding container 9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b: Status 404 returned error can't find the container with id 9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b Jan 30 17:54:31 crc kubenswrapper[4766]: I0130 17:54:31.984231 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.063951 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064001 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.064063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.069837 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.070709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.071672 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod 
\"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.086614 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"nova-cell1-conductor-db-sync-247jx\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.092276 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.098484 4766 scope.go:117] "RemoveContainer" containerID="1b90a80f4637be44b39402681550752b5fc9bcb70acb1239adbe9ebd8ef0ae15" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.194386 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:32 crc kubenswrapper[4766]: E0130 17:54:32.530458 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15b6b4f_b273_4ad3_bd5b_c8c21421d672.slice/crio-48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.660441 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 17:54:32 crc kubenswrapper[4766]: W0130 17:54:32.664223 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202a732a_6c9d_427a_9c87_af7c4af5d184.slice/crio-fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f WatchSource:0}: Error finding container fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f: Status 404 returned error can't find the container with id fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.829535 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerStarted","Data":"587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.829597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerStarted","Data":"9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.850365 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.850347009 podStartE2EDuration="2.850347009s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.847757538 +0000 UTC m=+5527.485714884" watchObservedRunningTime="2026-01-30 17:54:32.850347009 +0000 UTC m=+5527.488304345" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879314 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.879369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerStarted","Data":"172f11d1f08481e85c172028438948e00677ee40db5df64052d44e88f3ee8c9f"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882357 4766 generic.go:334] "Generic (PLEG): container finished" podID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" exitCode=0 Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.882751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerStarted","Data":"13c5060fcca39fb869c73e11390c606da85a656c1300d5ab6aa472270e9bf8ab"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884962 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.884975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerStarted","Data":"48c091b4127999cd92b0b2a6c8a5cc747b40f38f27c502854438f5732d970c5c"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.887514 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerStarted","Data":"fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.889157 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerStarted","Data":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.889220 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerStarted","Data":"92dbd7b1b8a472aec7c8d9dd2722ad2e6ddf00a37ec9c45580a2afbf75ca87fa"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.890995 4766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerStarted","Data":"a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af"} Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.907103 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.907081572 podStartE2EDuration="2.907081572s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.903875994 +0000 UTC m=+5527.541833350" watchObservedRunningTime="2026-01-30 17:54:32.907081572 +0000 UTC m=+5527.545038918" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.927484 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-247jx" podStartSLOduration=1.927463235 podStartE2EDuration="1.927463235s" podCreationTimestamp="2026-01-30 17:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.922624863 +0000 UTC m=+5527.560582209" watchObservedRunningTime="2026-01-30 17:54:32.927463235 +0000 UTC m=+5527.565420581" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.974274 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9742551170000002 podStartE2EDuration="2.974255117s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.971372069 +0000 UTC m=+5527.609329425" watchObservedRunningTime="2026-01-30 17:54:32.974255117 +0000 UTC m=+5527.612212453" Jan 30 17:54:32 crc kubenswrapper[4766]: I0130 17:54:32.992046 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.99202146 podStartE2EDuration="2.99202146s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:32.985647457 +0000 UTC m=+5527.623604803" watchObservedRunningTime="2026-01-30 17:54:32.99202146 +0000 UTC m=+5527.629978806" Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.010563 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5xsrx" podStartSLOduration=3.010542413 podStartE2EDuration="3.010542413s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:33.00160093 +0000 UTC m=+5527.639558276" watchObservedRunningTime="2026-01-30 17:54:33.010542413 +0000 UTC m=+5527.648499769" Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.905819 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerStarted","Data":"5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496"} Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.909697 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" 
event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerStarted","Data":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} Jan 30 17:54:33 crc kubenswrapper[4766]: I0130 17:54:33.946615 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" podStartSLOduration=3.9465930240000002 podStartE2EDuration="3.946593024s" podCreationTimestamp="2026-01-30 17:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:33.935717098 +0000 UTC m=+5528.573674474" watchObservedRunningTime="2026-01-30 17:54:33.946593024 +0000 UTC m=+5528.584550370" Jan 30 17:54:34 crc kubenswrapper[4766]: I0130 17:54:34.916375 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:35 crc kubenswrapper[4766]: I0130 17:54:35.926473 4766 generic.go:334] "Generic (PLEG): container finished" podID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerID="5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496" exitCode=0 Jan 30 17:54:35 crc kubenswrapper[4766]: I0130 17:54:35.926574 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerDied","Data":"5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496"} Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.090284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.227411 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.227467 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.389609 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.936198 4766 generic.go:334] "Generic (PLEG): container finished" podID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerID="a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af" exitCode=0 Jan 30 17:54:36 crc kubenswrapper[4766]: I0130 17:54:36.936418 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerDied","Data":"a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af"} Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.290504 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388542 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388715 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388741 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.388780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") pod \"202a732a-6c9d-427a-9c87-af7c4af5d184\" (UID: \"202a732a-6c9d-427a-9c87-af7c4af5d184\") " Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.395404 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts" (OuterVolumeSpecName: "scripts") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.407399 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd" (OuterVolumeSpecName: "kube-api-access-2hsmd") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "kube-api-access-2hsmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.419349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.430328 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data" (OuterVolumeSpecName: "config-data") pod "202a732a-6c9d-427a-9c87-af7c4af5d184" (UID: "202a732a-6c9d-427a-9c87-af7c4af5d184"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491115 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491216 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsmd\" (UniqueName: \"kubernetes.io/projected/202a732a-6c9d-427a-9c87-af7c4af5d184-kube-api-access-2hsmd\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491249 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.491258 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202a732a-6c9d-427a-9c87-af7c4af5d184-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-247jx" event={"ID":"202a732a-6c9d-427a-9c87-af7c4af5d184","Type":"ContainerDied","Data":"fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f"} Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945675 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe924abbdcf3b481acc8f2f30bd7b8f39b64f1fdc382d7ba842e1f9b708fd84f" Jan 30 17:54:37 crc kubenswrapper[4766]: I0130 17:54:37.945653 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-247jx" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.052971 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:54:38 crc kubenswrapper[4766]: E0130 17:54:38.053311 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.053327 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.053495 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" containerName="nova-cell1-conductor-db-sync" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.054198 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.055589 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.071660 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101092 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101332 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.101370 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.202792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.203236 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.203281 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.208925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.209004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.221762 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"nova-cell1-conductor-0\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.388469 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.419310 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513807 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513845 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513874 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.513933 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") pod \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\" (UID: \"083bdb6d-c3f3-412d-9097-48e66c7f28d0\") " Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.517279 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts" (OuterVolumeSpecName: "scripts") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.517687 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht" (OuterVolumeSpecName: "kube-api-access-9qrht") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "kube-api-access-9qrht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.538771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.589386 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data" (OuterVolumeSpecName: "config-data") pod "083bdb6d-c3f3-412d-9097-48e66c7f28d0" (UID: "083bdb6d-c3f3-412d-9097-48e66c7f28d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.616815 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617218 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617237 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083bdb6d-c3f3-412d-9097-48e66c7f28d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.617249 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qrht\" (UniqueName: \"kubernetes.io/projected/083bdb6d-c3f3-412d-9097-48e66c7f28d0-kube-api-access-9qrht\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.922829 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:54:38 crc kubenswrapper[4766]: W0130 17:54:38.925959 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ca03b3_7414_49ac_8fb1_7d2489d1c251.slice/crio-18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b WatchSource:0}: Error finding container 18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b: Status 404 returned error can't find the container with id 18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.957161 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerStarted","Data":"18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b"} Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.958961 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5xsrx" event={"ID":"083bdb6d-c3f3-412d-9097-48e66c7f28d0","Type":"ContainerDied","Data":"8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08"} Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.958992 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9ba534c0b1a1f9f460915fbcc26e1ca1c39179bfef7532d76f178d02f53c08" Jan 30 17:54:38 crc kubenswrapper[4766]: I0130 17:54:38.959055 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5xsrx" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.124916 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.125270 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" containerID="cri-o://e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" gracePeriod=30 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.125428 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" containerID="cri-o://41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" gracePeriod=30 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.152812 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.153461 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" containerID="cri-o://74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" gracePeriod=30 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193686 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" containerID="cri-o://40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" gracePeriod=30 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.193988 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" containerID="cri-o://8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" gracePeriod=30 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.691349 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748860 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.748916 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") pod \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\" (UID: \"7ff66025-4eb1-4da2-886f-e5ef9bf4831d\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.749307 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs" (OuterVolumeSpecName: "logs") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.753782 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t" (OuterVolumeSpecName: "kube-api-access-5tv7t") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "kube-api-access-5tv7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.756036 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.784238 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.801570 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data" (OuterVolumeSpecName: "config-data") pod "7ff66025-4eb1-4da2-886f-e5ef9bf4831d" (UID: "7ff66025-4eb1-4da2-886f-e5ef9bf4831d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850171 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850317 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850399 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") pod \"10a919f2-e41c-45e8-ba7f-882408152952\" (UID: \"10a919f2-e41c-45e8-ba7f-882408152952\") " Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.850587 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs" (OuterVolumeSpecName: "logs") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851814 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tv7t\" (UniqueName: \"kubernetes.io/projected/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-kube-api-access-5tv7t\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851845 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851863 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851878 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff66025-4eb1-4da2-886f-e5ef9bf4831d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.851891 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a919f2-e41c-45e8-ba7f-882408152952-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.853751 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt" (OuterVolumeSpecName: "kube-api-access-nppkt") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). 
InnerVolumeSpecName "kube-api-access-nppkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.870926 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.877826 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data" (OuterVolumeSpecName: "config-data") pod "10a919f2-e41c-45e8-ba7f-882408152952" (UID: "10a919f2-e41c-45e8-ba7f-882408152952"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953147 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953188 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a919f2-e41c-45e8-ba7f-882408152952-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.953201 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nppkt\" (UniqueName: \"kubernetes.io/projected/10a919f2-e41c-45e8-ba7f-882408152952-kube-api-access-nppkt\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971609 4766 generic.go:334] "Generic (PLEG): container finished" podID="10a919f2-e41c-45e8-ba7f-882408152952" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" exitCode=0 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971663 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971672 4766 generic.go:334] "Generic (PLEG): container finished" podID="10a919f2-e41c-45e8-ba7f-882408152952" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" exitCode=143 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971671 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971751 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a919f2-e41c-45e8-ba7f-882408152952","Type":"ContainerDied","Data":"48c091b4127999cd92b0b2a6c8a5cc747b40f38f27c502854438f5732d970c5c"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.971783 4766 scope.go:117] "RemoveContainer" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.976656 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerStarted","Data":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.977883 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.979989 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" exitCode=0 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980014 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" exitCode=143 Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980031 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980046 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980057 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ff66025-4eb1-4da2-886f-e5ef9bf4831d","Type":"ContainerDied","Data":"172f11d1f08481e85c172028438948e00677ee40db5df64052d44e88f3ee8c9f"} Jan 30 17:54:39 crc kubenswrapper[4766]: I0130 17:54:39.980102 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.000439 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.000393436 podStartE2EDuration="2.000393436s" podCreationTimestamp="2026-01-30 17:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:39.996690965 +0000 UTC m=+5534.634648311" watchObservedRunningTime="2026-01-30 17:54:40.000393436 +0000 UTC m=+5534.638350782" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.006487 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.028038 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.038500 4766 scope.go:117] "RemoveContainer" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.042666 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.042720 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} err="failed to get container status \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.042752 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.044532 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.044599 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} err="failed to get container status \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.044631 4766 scope.go:117] "RemoveContainer" 
containerID="8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.045105 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2"} err="failed to get container status \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": rpc error: code = NotFound desc = could not find container \"8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2\": container with ID starting with 8c8a7551ecc3fbbbe1fbb478fce1f122ca45e30abb4e612c95ccf0bc7e68b4e2 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.045140 4766 scope.go:117] "RemoveContainer" containerID="40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.046044 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e"} err="failed to get container status \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": rpc error: code = NotFound desc = could not find container \"40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e\": container with ID starting with 40bf0f13fd5bcef58df4201f8bf2739cd3e440f1b138f60ee0532fff136fc71e not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.046075 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.061901 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.061941 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.068985 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.078075 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.086334 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.086888 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.086963 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087047 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.087102 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087171 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.087253 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" 
containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.087329 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.089509 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.089623 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.089714 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090207 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" containerName="nova-manage" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090314 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-api" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090474 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-metadata" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090558 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" containerName="nova-api-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.090656 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a919f2-e41c-45e8-ba7f-882408152952" containerName="nova-metadata-log" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.093126 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.102220 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.105309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.120300 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.120677 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.121648 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.121829 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} err="failed to get container status \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.121969 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.126860 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: E0130 17:54:40.127487 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.130266 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} err="failed to get container status \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.130417 4766 scope.go:117] "RemoveContainer" containerID="41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.128896 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.132566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.135588 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491"} err="failed to get container status \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": rpc error: code = NotFound desc = could not find container \"41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491\": container with ID starting with 41a4d8172568caba641e9ac0436c5f3b1aaf335e2d914d7553e5d2e3af3e1491 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.138151 4766 scope.go:117] "RemoveContainer" containerID="e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.139202 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1"} err="failed to get container status \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": rpc error: code = NotFound desc = could not find container \"e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1\": container with ID starting with e35194ade8587723d8cb57e970764cad70a2f443d63f80d8d0f87f8168979ce1 not found: ID does not exist" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.157573 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.157991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158172 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158318 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158435 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.158984 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.261371 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.261772 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262230 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 
17:54:40.262391 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262485 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262610 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262740 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.262897 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.264708 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.265093 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.267464 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.269367 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.270114 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.278967 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.282119 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"nova-api-0\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.283922 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"nova-metadata-0\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.428772 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.452861 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.871459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:40 crc kubenswrapper[4766]: W0130 17:54:40.871921 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod450c67ff_a16a_43cf_8852_663c4c0073af.slice/crio-64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab WatchSource:0}: Error finding container 64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab: Status 404 returned error can't find the container with id 64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab Jan 30 17:54:40 crc kubenswrapper[4766]: I0130 17:54:40.994134 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab"} Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.034133 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.389607 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.404923 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.486468 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.564771 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.565486 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" containerID="cri-o://5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" gracePeriod=10 Jan 30 17:54:41 crc kubenswrapper[4766]: I0130 17:54:41.774715 4766 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.42:5353: connect: connection refused" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.018772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.019063 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerStarted","Data":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.047405 4766 generic.go:334] "Generic (PLEG): container finished" podID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerID="5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" exitCode=0 Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.071620 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a919f2-e41c-45e8-ba7f-882408152952" path="/var/lib/kubelet/pods/10a919f2-e41c-45e8-ba7f-882408152952/volumes" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.072551 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff66025-4eb1-4da2-886f-e5ef9bf4831d" path="/var/lib/kubelet/pods/7ff66025-4eb1-4da2-886f-e5ef9bf4831d/volumes" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073217 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.073201182 podStartE2EDuration="2.073201182s" podCreationTimestamp="2026-01-30 17:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:42.062661445 +0000 UTC m=+5536.700618811" watchObservedRunningTime="2026-01-30 17:54:42.073201182 +0000 UTC m=+5536.711158538" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073359 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073447 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.073462 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerStarted","Data":"0b18c0a6248e0f08e59e0f76327c26fc51a0ee3b357d761ada211d388f46fe36"} 
Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.095993 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.095972521 podStartE2EDuration="2.095972521s" podCreationTimestamp="2026-01-30 17:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:42.084754116 +0000 UTC m=+5536.722711462" watchObservedRunningTime="2026-01-30 17:54:42.095972521 +0000 UTC m=+5536.733929867" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.148167 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250605 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250663 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250758 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250803 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.250838 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") pod \"df37c2c0-49c6-46b4-a4c9-085cad77c471\" (UID: \"df37c2c0-49c6-46b4-a4c9-085cad77c471\") " Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.255346 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4" (OuterVolumeSpecName: "kube-api-access-sscz4") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "kube-api-access-sscz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.294694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config" (OuterVolumeSpecName: "config") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.295437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.298144 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.303115 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df37c2c0-49c6-46b4-a4c9-085cad77c471" (UID: "df37c2c0-49c6-46b4-a4c9-085cad77c471"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353444 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353481 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sscz4\" (UniqueName: \"kubernetes.io/projected/df37c2c0-49c6-46b4-a4c9-085cad77c471-kube-api-access-sscz4\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353495 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353513 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:42 crc kubenswrapper[4766]: I0130 17:54:42.353544 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df37c2c0-49c6-46b4-a4c9-085cad77c471-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.065107 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.066365 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fdd96cfc-xc6hw" event={"ID":"df37c2c0-49c6-46b4-a4c9-085cad77c471","Type":"ContainerDied","Data":"62402daa4d1e00e414a6153806e7a4ebba06101c39ecd01fd579e17d1df427fb"} Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.066479 4766 scope.go:117] "RemoveContainer" containerID="5833d194064bb1f8316a6b4185acea8bc03322516d726c459b7e5ddf6211384a" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.092555 4766 scope.go:117] "RemoveContainer" containerID="97eb96b855b10a22a6e46b822f4b71edbb3ba59805d7a1f85175cae2577f8939" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.119042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.132379 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64fdd96cfc-xc6hw"] Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.925101 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.978905 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.979335 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.979621 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") pod \"960be176-b983-4be1-90cc-05fdc39fb4e3\" (UID: \"960be176-b983-4be1-90cc-05fdc39fb4e3\") " Jan 30 17:54:43 crc kubenswrapper[4766]: I0130 17:54:43.984496 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58" (OuterVolumeSpecName: "kube-api-access-gvx58") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "kube-api-access-gvx58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.005320 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data" (OuterVolumeSpecName: "config-data") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.013539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "960be176-b983-4be1-90cc-05fdc39fb4e3" (UID: "960be176-b983-4be1-90cc-05fdc39fb4e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.051748 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" path="/var/lib/kubelet/pods/df37c2c0-49c6-46b4-a4c9-085cad77c471/volumes" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077073 4766 generic.go:334] "Generic (PLEG): container finished" podID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" exitCode=0 Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerDied","Data":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"} Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077124 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077155 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960be176-b983-4be1-90cc-05fdc39fb4e3","Type":"ContainerDied","Data":"92dbd7b1b8a472aec7c8d9dd2722ad2e6ddf00a37ec9c45580a2afbf75ca87fa"} Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.077215 4766 scope.go:117] "RemoveContainer" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081653 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081674 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvx58\" (UniqueName: \"kubernetes.io/projected/960be176-b983-4be1-90cc-05fdc39fb4e3-kube-api-access-gvx58\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.081684 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960be176-b983-4be1-90cc-05fdc39fb4e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.107679 4766 scope.go:117] "RemoveContainer" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.108167 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": container with ID starting with 74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e not found: ID does not exist" containerID="74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.108221 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e"} err="failed to get container status \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": rpc error: code = NotFound desc = could not find container \"74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e\": container with ID starting with 74ec0f163618dbe2e221c599805812701caedb5c8a8888ec9f014fdebc3d808e not found: ID does not exist" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.108544 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.126550 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.136616 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137110 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137138 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137205 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="init" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137216 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="init" Jan 30 17:54:44 crc kubenswrapper[4766]: E0130 17:54:44.137241 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137251 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137471 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" containerName="nova-scheduler-scheduler" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.137514 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="df37c2c0-49c6-46b4-a4c9-085cad77c471" containerName="dnsmasq-dns" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.138447 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.145420 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.146184 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.286952 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.287051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.287449 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388763 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388837 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.388874 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.392998 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.394632 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.405281 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbf2\" (UniqueName: 
\"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"nova-scheduler-0\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.457317 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:54:44 crc kubenswrapper[4766]: I0130 17:54:44.872598 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.089799 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerStarted","Data":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.089892 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerStarted","Data":"dbf668e645f6a44821ff790f6478d0f15ef68055392d35449de7aa0dcc2f94d1"} Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.115446 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.115420334 podStartE2EDuration="1.115420334s" podCreationTimestamp="2026-01-30 17:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:45.109830853 +0000 UTC m=+5539.747788199" watchObservedRunningTime="2026-01-30 17:54:45.115420334 +0000 UTC m=+5539.753377670" Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.454010 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:45 crc kubenswrapper[4766]: I0130 17:54:45.454157 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:54:46 crc kubenswrapper[4766]: I0130 17:54:46.053930 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960be176-b983-4be1-90cc-05fdc39fb4e3" path="/var/lib/kubelet/pods/960be176-b983-4be1-90cc-05fdc39fb4e3/volumes" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.418865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.949274 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.951497 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.954512 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.954749 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 17:54:48 crc kubenswrapper[4766]: I0130 17:54:48.975399 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.078805 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.078912 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.079132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.079243 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181036 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181104 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181146 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.181166 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.188856 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.198834 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.200247 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.202911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"nova-cell1-cell-mapping-nfnj2\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.277334 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.457477 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:54:49 crc kubenswrapper[4766]: I0130 17:54:49.721900 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 17:54:49 crc kubenswrapper[4766]: W0130 17:54:49.734487 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018ff185_8917_437b_9c5a_ec143d1fc84a.slice/crio-80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d WatchSource:0}: Error finding container 80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d: Status 404 returned error can't find the container with id 80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.137205 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerStarted","Data":"1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0"} Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.137739 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerStarted","Data":"80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d"} Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.159137 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nfnj2" podStartSLOduration=2.159112844 podStartE2EDuration="2.159112844s" podCreationTimestamp="2026-01-30 17:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:54:50.154119758 +0000 UTC m=+5544.792077104" watchObservedRunningTime="2026-01-30 17:54:50.159112844 +0000 UTC m=+5544.797070190" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.429796 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.429857 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.453207 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:54:50 crc kubenswrapper[4766]: I0130 17:54:50.453685 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.554458 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.554691 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 
30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.555008 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.62:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:54:51 crc kubenswrapper[4766]: I0130 17:54:51.555039 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.61:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:54:54 crc kubenswrapper[4766]: I0130 17:54:54.457735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:54:54 crc kubenswrapper[4766]: I0130 17:54:54.482738 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.177268 4766 generic.go:334] "Generic (PLEG): container finished" podID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerID="1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0" exitCode=0 Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.177341 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerDied","Data":"1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0"} Jan 30 17:54:55 crc kubenswrapper[4766]: I0130 17:54:55.206611 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.501285 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538043 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538226 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.538371 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.539075 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") pod \"018ff185-8917-437b-9c5a-ec143d1fc84a\" (UID: \"018ff185-8917-437b-9c5a-ec143d1fc84a\") " Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.544360 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg" (OuterVolumeSpecName: "kube-api-access-b5xlg") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "kube-api-access-b5xlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.545029 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts" (OuterVolumeSpecName: "scripts") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.571112 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data" (OuterVolumeSpecName: "config-data") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.590364 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018ff185-8917-437b-9c5a-ec143d1fc84a" (UID: "018ff185-8917-437b-9c5a-ec143d1fc84a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641347 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641384 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641400 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018ff185-8917-437b-9c5a-ec143d1fc84a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:56 crc kubenswrapper[4766]: I0130 17:54:56.641413 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5xlg\" (UniqueName: \"kubernetes.io/projected/018ff185-8917-437b-9c5a-ec143d1fc84a-kube-api-access-b5xlg\") on node \"crc\" DevicePath \"\"" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200597 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nfnj2" event={"ID":"018ff185-8917-437b-9c5a-ec143d1fc84a","Type":"ContainerDied","Data":"80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d"} Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200929 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c2641525f0b192d6bdd7054ff66d4a229f3df731ea25d56d68b4f67c258b3d" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.200651 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nfnj2" Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367124 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367712 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" containerID="cri-o://96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.367968 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" containerID="cri-o://4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.388142 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.388598 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" containerID="cri-o://762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443061 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443361 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" 
containerName="nova-metadata-log" containerID="cri-o://24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" gracePeriod=30 Jan 30 17:54:57 crc kubenswrapper[4766]: I0130 17:54:57.443981 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" containerID="cri-o://8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" gracePeriod=30 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.209543 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" exitCode=143 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.209593 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.211406 4766 generic.go:334] "Generic (PLEG): container finished" podID="450c67ff-a16a-43cf-8852-663c4c0073af" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" exitCode=143 Jan 30 17:54:58 crc kubenswrapper[4766]: I0130 17:54:58.211428 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.459236 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.460984 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.463869 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:54:59 crc kubenswrapper[4766]: E0130 17:54:59.463920 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.137902 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.149760 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232829 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232891 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232947 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.232984 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233063 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233095 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233137 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") pod \"450c67ff-a16a-43cf-8852-663c4c0073af\" (UID: \"450c67ff-a16a-43cf-8852-663c4c0073af\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.233202 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") pod \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\" (UID: \"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.234607 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs" (OuterVolumeSpecName: "logs") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.235042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs" (OuterVolumeSpecName: "logs") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.241861 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9" (OuterVolumeSpecName: "kube-api-access-987z9") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "kube-api-access-987z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242033 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" exitCode=0 Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242060 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242251 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5","Type":"ContainerDied","Data":"0b18c0a6248e0f08e59e0f76327c26fc51a0ee3b357d761ada211d388f46fe36"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.242276 4766 scope.go:117] "RemoveContainer" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.244241 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r" (OuterVolumeSpecName: "kube-api-access-fhf5r") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "kube-api-access-fhf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.244784 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245479 4766 generic.go:334] "Generic (PLEG): container finished" podID="450c67ff-a16a-43cf-8852-663c4c0073af" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" exitCode=0 Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245601 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245719 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"450c67ff-a16a-43cf-8852-663c4c0073af","Type":"ContainerDied","Data":"64b6cff4081d91692275bfc39b9e03b4bbec983aeeaa2f9495ea140e691acbab"} Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.245689 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.263636 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data" (OuterVolumeSpecName: "config-data") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.263836 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.269375 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data" (OuterVolumeSpecName: "config-data") pod "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" (UID: "f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.277784 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "450c67ff-a16a-43cf-8852-663c4c0073af" (UID: "450c67ff-a16a-43cf-8852-663c4c0073af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.308059 4766 scope.go:117] "RemoveContainer" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.323843 4766 scope.go:117] "RemoveContainer" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.324409 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": container with ID starting with 8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891 not found: ID does not exist" containerID="8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324476 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891"} err="failed to get container status \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": rpc error: code = NotFound desc = could not find container \"8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891\": container with ID starting with 8bfdf6d3bbf636cc3091d8977591217cb762d90d47739cb908609c1bf7193891 not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324507 4766 scope.go:117] "RemoveContainer" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.324941 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": container with ID starting with 24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912 not found: ID does not exist" containerID="24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.324994 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912"} err="failed to get container status \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": rpc error: code = NotFound desc = could not find container \"24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912\": container with ID starting with 24761a58c39941a476bb25332d3444a08bebbe87402e559bd577cf7061d30912 not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.325029 4766 scope.go:117] "RemoveContainer" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335728 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335762 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhf5r\" (UniqueName: \"kubernetes.io/projected/450c67ff-a16a-43cf-8852-663c4c0073af-kube-api-access-fhf5r\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335772 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335782 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335790 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-987z9\" (UniqueName: \"kubernetes.io/projected/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-kube-api-access-987z9\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335798 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/450c67ff-a16a-43cf-8852-663c4c0073af-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335806 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.335814 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450c67ff-a16a-43cf-8852-663c4c0073af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.342376 4766 scope.go:117] "RemoveContainer" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.359714 4766 scope.go:117] "RemoveContainer" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.360113 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": container with ID starting with 4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f not found: ID does not exist" containerID="4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.360152 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f"} err="failed to get container status \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": rpc error: code = NotFound desc = could not find container \"4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f\": container with ID starting with 4594d9a8d1377e41403f374bd14e980ff695fcce28e881434e8bad7eed7a1f0f not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.360317 4766 scope.go:117] "RemoveContainer" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.360681 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": container with ID starting with 96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc not found: ID does not exist" containerID="96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc" Jan 30 17:55:01 crc 
kubenswrapper[4766]: I0130 17:55:01.360732 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc"} err="failed to get container status \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": rpc error: code = NotFound desc = could not find container \"96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc\": container with ID starting with 96ff05c6824774ab3542ad515add8dc5231abeafd8be736ff82970a43f6e74cc not found: ID does not exist" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.599797 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.614359 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.676288 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.677747 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.677774 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.677811 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.677967 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678010 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678019 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678033 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678039 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" Jan 30 17:55:01 crc kubenswrapper[4766]: E0130 17:55:01.678056 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.678065 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688590 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-metadata" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688666 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" containerName="nova-manage" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688684 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-api" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688711 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" containerName="nova-api-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.688760 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" containerName="nova-metadata-log" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.694148 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.697435 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.708093 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.728139 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743382 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.743980 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.744009 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.744038 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.752223 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.754894 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.757674 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.762474 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.800047 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845446 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845632 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") pod \"682ac4fd-3610-40e1-8c35-8396cf9f5342\" (UID: \"682ac4fd-3610-40e1-8c35-8396cf9f5342\") " Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.845939 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846077 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846165 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846238 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846285 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846308 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.846370 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.847721 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.851402 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2" (OuterVolumeSpecName: "kube-api-access-8mbf2") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "kube-api-access-8mbf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.852949 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.854782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.867573 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"nova-metadata-0\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " pod="openstack/nova-metadata-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.874709 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data" (OuterVolumeSpecName: "config-data") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.882885 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "682ac4fd-3610-40e1-8c35-8396cf9f5342" (UID: "682ac4fd-3610-40e1-8c35-8396cf9f5342"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.947902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.947985 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948080 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948108 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948153 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbf2\" (UniqueName: \"kubernetes.io/projected/682ac4fd-3610-40e1-8c35-8396cf9f5342-kube-api-access-8mbf2\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948165 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.948191 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682ac4fd-3610-40e1-8c35-8396cf9f5342-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.950034 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.952620 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.956495 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:01 crc kubenswrapper[4766]: I0130 17:55:01.965808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod 
\"nova-api-0\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " pod="openstack/nova-api-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.053693 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450c67ff-a16a-43cf-8852-663c4c0073af" path="/var/lib/kubelet/pods/450c67ff-a16a-43cf-8852-663c4c0073af/volumes" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.054531 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5" path="/var/lib/kubelet/pods/f4fb0fd7-f2d7-4b42-a4d3-4c7546a348d5/volumes" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.097736 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.128548 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264408 4766 generic.go:334] "Generic (PLEG): container finished" podID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" exitCode=0 Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264475 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerDied","Data":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264502 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"682ac4fd-3610-40e1-8c35-8396cf9f5342","Type":"ContainerDied","Data":"dbf668e645f6a44821ff790f6478d0f15ef68055392d35449de7aa0dcc2f94d1"} Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264521 4766 scope.go:117] "RemoveContainer" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.264535 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.289303 4766 scope.go:117] "RemoveContainer" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: E0130 17:55:02.289896 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": container with ID starting with 762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927 not found: ID does not exist" containerID="762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.289937 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927"} err="failed to get container status \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": rpc error: code = NotFound desc = could not find container \"762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927\": container with ID starting with 762ee95b18763ac4363056a859b53a7202c53d1b4f7202a690be49ba98fb2927 not found: ID does not exist" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.291351 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.305247 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320159 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: E0130 17:55:02.320646 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320662 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.320854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" containerName="nova-scheduler-scheduler" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.321609 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.324385 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.331127 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357554 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357915 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.357945 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.459916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.460027 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.460053 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.465216 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.465635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.475221 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrcnb\" (UniqueName: 
\"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"nova-scheduler-0\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.566232 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:02 crc kubenswrapper[4766]: W0130 17:55:02.570987 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0670fd5_b8de_408e_9cfa_b594e8e3aa84.slice/crio-79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04 WatchSource:0}: Error finding container 79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04: Status 404 returned error can't find the container with id 79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04 Jan 30 17:55:02 crc kubenswrapper[4766]: W0130 17:55:02.639928 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82bd49a0_efdc_46f1_95b8_a706be68208d.slice/crio-66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202 WatchSource:0}: Error finding container 66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202: Status 404 returned error can't find the container with id 66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202 Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.641379 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:55:02 crc kubenswrapper[4766]: I0130 17:55:02.641590 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.096596 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:03 crc kubenswrapper[4766]: W0130 17:55:03.099066 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf204102e_c8ed_4d40_b8c3_87c1921f66fb.slice/crio-6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11 WatchSource:0}: Error finding container 6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11: Status 404 returned error can't find the container with id 6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11 Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278192 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278496 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.278507 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerStarted","Data":"79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284503 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284557 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.284572 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerStarted","Data":"66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.287574 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerStarted","Data":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.287614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerStarted","Data":"6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11"} Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.302554 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302535711 podStartE2EDuration="2.302535711s" podCreationTimestamp="2026-01-30 17:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.296009343 +0000 UTC m=+5557.933966699" watchObservedRunningTime="2026-01-30 17:55:03.302535711 +0000 UTC m=+5557.940493047" Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.323071 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.323053438 podStartE2EDuration="2.323053438s" podCreationTimestamp="2026-01-30 17:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.314988719 +0000 UTC m=+5557.952946065" watchObservedRunningTime="2026-01-30 17:55:03.323053438 +0000 UTC m=+5557.961010784" Jan 30 17:55:03 crc kubenswrapper[4766]: I0130 17:55:03.338828 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.338807477 podStartE2EDuration="1.338807477s" podCreationTimestamp="2026-01-30 17:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:03.330814419 +0000 UTC m=+5557.968771755" watchObservedRunningTime="2026-01-30 17:55:03.338807477 +0000 UTC m=+5557.976764823" Jan 30 17:55:04 crc kubenswrapper[4766]: I0130 17:55:04.049198 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682ac4fd-3610-40e1-8c35-8396cf9f5342" path="/var/lib/kubelet/pods/682ac4fd-3610-40e1-8c35-8396cf9f5342/volumes" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.098826 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.099976 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Jan 30 17:55:07 crc kubenswrapper[4766]: I0130 17:55:07.642315 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.098982 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.099876 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.129323 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.129382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.642226 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:55:12 crc kubenswrapper[4766]: I0130 17:55:12.674284 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.099739 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222375 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222705 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.222893 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:55:13 crc kubenswrapper[4766]: I0130 17:55:13.397611 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.100608 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.101244 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.104568 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.104722 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:55:22 crc 
Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.138603 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.138948 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.148361 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.430199 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.437543 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.614721 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.617229 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.653844 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.734848 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.734999 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735048 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.735123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837389 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837454 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837596 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.837653 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838617 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.838925 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.839214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.859357 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7855\" (UniqueName: 
\"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"dnsmasq-dns-85bdb4454f-9zxvr\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:22 crc kubenswrapper[4766]: I0130 17:55:22.959938 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:23 crc kubenswrapper[4766]: I0130 17:55:23.521773 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451210 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerID="90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6" exitCode=0 Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451317 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6"} Jan 30 17:55:24 crc kubenswrapper[4766]: I0130 17:55:24.451687 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerStarted","Data":"eab82cb398525f14ced0104b7ca1271c77f56fe1657116a66a65ddcab59d73d5"} Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.462840 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerStarted","Data":"8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6"} Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.463149 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:25 crc kubenswrapper[4766]: I0130 17:55:25.496638 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podStartSLOduration=3.49662073 podStartE2EDuration="3.49662073s" podCreationTimestamp="2026-01-30 17:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:25.493617798 +0000 UTC m=+5580.131575164" watchObservedRunningTime="2026-01-30 17:55:25.49662073 +0000 UTC m=+5580.134578076" Jan 30 17:55:32 crc kubenswrapper[4766]: I0130 17:55:32.204391 4766 scope.go:117] "RemoveContainer" containerID="39cb977a0be995f7d392e56740fc2759cd94bc46c0c9536f717062f35b225716" Jan 30 17:55:32 crc kubenswrapper[4766]: I0130 17:55:32.962535 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.027943 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.028211 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" containerID="cri-o://33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" gracePeriod=10 Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.508639 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533497 4766 generic.go:334] "Generic (PLEG): container finished" podID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" exitCode=0 Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533556 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533589 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" event={"ID":"c15b6b4f-b273-4ad3-bd5b-c8c21421d672","Type":"ContainerDied","Data":"13c5060fcca39fb869c73e11390c606da85a656c1300d5ab6aa472270e9bf8ab"} Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533617 4766 scope.go:117] "RemoveContainer" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.533822 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c8b5f8b9-npmjq" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.559864 4766 scope.go:117] "RemoveContainer" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.580676 4766 scope.go:117] "RemoveContainer" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.581076 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": container with ID starting with 33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743 not found: ID does not exist" containerID="33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581109 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743"} err="failed to get container status \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": rpc error: code = NotFound desc = could not find container \"33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743\": container with ID starting with 33dd30f82fc48228e76ef54a0322f1648befca3fd844df31778e934e699e4743 not found: ID does not exist" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581134 4766 scope.go:117] "RemoveContainer" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.581523 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": container with ID starting with 48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639 not found: ID does not exist" containerID="48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.581548 4766 pod_container_deletor.go:53] "DeleteContainer returned error" 
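
The NotFound errors above are benign: the kubelet asks the runtime to delete containers that CRI-O has already removed, logs "DeleteContainer returned error", and moves on, because the desired end state (container absent) already holds. A hedged Go sketch of that idempotent-delete pattern; errNotFound and removeContainer are invented stand-ins for the CRI error handling, not kubelet code:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the runtime's gRPC NotFound ("could not
    // find container ..."); the real kubelet inspects the CRI error code.
    var errNotFound = errors.New("container not found")

    // removeContainer treats "already gone" as success: deleting a
    // container that no longer exists leaves the system in the desired
    // state, so the NotFound error can be logged and then ignored.
    func removeContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil
    }

    func main() {
        // Simulate the runtime having already deleted the container.
        gone := func(id string) error {
            return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
        }
        fmt.Println(removeContainer(gone, "33dd30f82fc4")) // <nil>: treated as success
    }
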
containerID={"Type":"cri-o","ID":"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639"} err="failed to get container status \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": rpc error: code = NotFound desc = could not find container \"48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639\": container with ID starting with 48587e2732cd32e892ed02ac055074931d40c14169e13e67d1daec4e62c27639 not found: ID does not exist" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.653900 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.653945 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654072 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654099 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.654235 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") pod \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\" (UID: \"c15b6b4f-b273-4ad3-bd5b-c8c21421d672\") " Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.664397 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5" (OuterVolumeSpecName: "kube-api-access-5xqt5") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "kube-api-access-5xqt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.704259 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.705771 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.709078 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.726468 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config" (OuterVolumeSpecName: "config") pod "c15b6b4f-b273-4ad3-bd5b-c8c21421d672" (UID: "c15b6b4f-b273-4ad3-bd5b-c8c21421d672"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756359 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756391 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756404 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756414 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xqt5\" (UniqueName: \"kubernetes.io/projected/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-kube-api-access-5xqt5\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.756423 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15b6b4f-b273-4ad3-bd5b-c8c21421d672-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.871905 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: I0130 17:55:33.881299 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8c8b5f8b9-npmjq"] Jan 30 17:55:33 crc kubenswrapper[4766]: E0130 17:55:33.987737 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15b6b4f_b273_4ad3_bd5b_c8c21421d672.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.050145 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" path="/var/lib/kubelet/pods/c15b6b4f-b273-4ad3-bd5b-c8c21421d672/volumes" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970255 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:34 crc kubenswrapper[4766]: E0130 17:55:34.970637 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="init" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 
17:55:34.970648 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="init" Jan 30 17:55:34 crc kubenswrapper[4766]: E0130 17:55:34.970663 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970669 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.970854 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15b6b4f-b273-4ad3-bd5b-c8c21421d672" containerName="dnsmasq-dns" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.971478 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.984881 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.997877 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:34 crc kubenswrapper[4766]: I0130 17:55:34.999487 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.002196 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.023640 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078150 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078400 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.078632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.181313 4766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182121 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182202 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182345 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.182643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.183670 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.203635 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"cinder-b2b1-account-create-update-vjtsm\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.208044 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"cinder-db-create-h7zjx\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.288890 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.316670 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.743309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 17:55:35 crc kubenswrapper[4766]: I0130 17:55:35.817804 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.576994 4766 generic.go:334] "Generic (PLEG): container finished" podID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerID="9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d" exitCode=0 Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.577073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerDied","Data":"9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.577107 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerStarted","Data":"07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580577 4766 generic.go:334] "Generic (PLEG): container finished" podID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerID="d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963" exitCode=0 Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerDied","Data":"d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963"} Jan 30 17:55:36 crc kubenswrapper[4766]: I0130 17:55:36.580686 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerStarted","Data":"c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.008225 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.014394 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142024 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") pod \"912d4cef-a7f3-40a4-b498-f1da7361a15c\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142167 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") pod \"912d4cef-a7f3-40a4-b498-f1da7361a15c\" (UID: \"912d4cef-a7f3-40a4-b498-f1da7361a15c\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142219 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") pod \"3f3c8440-d3be-418a-a446-f3f592a864bd\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.142337 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") pod \"3f3c8440-d3be-418a-a446-f3f592a864bd\" (UID: \"3f3c8440-d3be-418a-a446-f3f592a864bd\") " Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.143085 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "912d4cef-a7f3-40a4-b498-f1da7361a15c" (UID: "912d4cef-a7f3-40a4-b498-f1da7361a15c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.143107 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f3c8440-d3be-418a-a446-f3f592a864bd" (UID: "3f3c8440-d3be-418a-a446-f3f592a864bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.149946 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l" (OuterVolumeSpecName: "kube-api-access-ssg6l") pod "3f3c8440-d3be-418a-a446-f3f592a864bd" (UID: "3f3c8440-d3be-418a-a446-f3f592a864bd"). InnerVolumeSpecName "kube-api-access-ssg6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.150757 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh" (OuterVolumeSpecName: "kube-api-access-p5hdh") pod "912d4cef-a7f3-40a4-b498-f1da7361a15c" (UID: "912d4cef-a7f3-40a4-b498-f1da7361a15c"). InnerVolumeSpecName "kube-api-access-p5hdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244395 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5hdh\" (UniqueName: \"kubernetes.io/projected/912d4cef-a7f3-40a4-b498-f1da7361a15c-kube-api-access-p5hdh\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244456 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssg6l\" (UniqueName: \"kubernetes.io/projected/3f3c8440-d3be-418a-a446-f3f592a864bd-kube-api-access-ssg6l\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244468 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f3c8440-d3be-418a-a446-f3f592a864bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.244478 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912d4cef-a7f3-40a4-b498-f1da7361a15c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598153 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7zjx" event={"ID":"912d4cef-a7f3-40a4-b498-f1da7361a15c","Type":"ContainerDied","Data":"c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598204 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13cfae4971c2f3c308d2e0901f4b258d140239011d32194fef3bdbcf0a24355" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.598529 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7zjx" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599735 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b2b1-account-create-update-vjtsm" event={"ID":"3f3c8440-d3be-418a-a446-f3f592a864bd","Type":"ContainerDied","Data":"07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161"} Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599757 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d17721c40f3bc1a831170de6726f64da456addb58247ead7a703131e06d161" Jan 30 17:55:38 crc kubenswrapper[4766]: I0130 17:55:38.599779 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b2b1-account-create-update-vjtsm" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.326412 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:40 crc kubenswrapper[4766]: E0130 17:55:40.327009 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327021 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: E0130 17:55:40.327052 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327059 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327254 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" containerName="mariadb-database-create" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327281 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" containerName="mariadb-account-create-update" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.327853 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.330447 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.336228 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.336385 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zh4ls" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.347461 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382566 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382642 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382768 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 
17:55:40.382901 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.382972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.383016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484095 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484154 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484192 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484231 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484256 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484273 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.484409 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489332 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.489776 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.491103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.507509 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"cinder-db-sync-7fd4h\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:40 crc kubenswrapper[4766]: I0130 17:55:40.660089 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:41 crc kubenswrapper[4766]: I0130 17:55:41.100070 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 17:55:41 crc kubenswrapper[4766]: W0130 17:55:41.102978 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d92cbfe_71f2_4dc5_981b_0c52c1169a2d.slice/crio-011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38 WatchSource:0}: Error finding container 011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38: Status 404 returned error can't find the container with id 011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38 Jan 30 17:55:41 crc kubenswrapper[4766]: I0130 17:55:41.626666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerStarted","Data":"011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38"} Jan 30 17:55:42 crc kubenswrapper[4766]: I0130 17:55:42.635106 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerStarted","Data":"7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba"} Jan 30 17:55:42 crc kubenswrapper[4766]: I0130 17:55:42.660557 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7fd4h" podStartSLOduration=2.660539588 podStartE2EDuration="2.660539588s" podCreationTimestamp="2026-01-30 17:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:42.653295411 +0000 UTC m=+5597.291252767" watchObservedRunningTime="2026-01-30 17:55:42.660539588 +0000 UTC m=+5597.298496934" Jan 30 17:55:44 crc kubenswrapper[4766]: E0130 17:55:44.224730 4766 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d92cbfe_71f2_4dc5_981b_0c52c1169a2d.slice/crio-conmon-7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:55:44 crc kubenswrapper[4766]: I0130 17:55:44.669582 4766 generic.go:334] "Generic (PLEG): container finished" podID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerID="7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba" exitCode=0 Jan 30 17:55:44 crc kubenswrapper[4766]: I0130 17:55:44.669669 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerDied","Data":"7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba"} Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.037305 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096042 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096148 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096227 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096224 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096276 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096378 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") pod \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\" (UID: \"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d\") " Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.096774 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101739 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg" (OuterVolumeSpecName: "kube-api-access-ptlwg") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "kube-api-access-ptlwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts" (OuterVolumeSpecName: "scripts") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.101983 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.122691 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.141474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data" (OuterVolumeSpecName: "config-data") pod "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" (UID: "7d92cbfe-71f2-4dc5-981b-0c52c1169a2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198745 4766 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198776 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198786 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptlwg\" (UniqueName: \"kubernetes.io/projected/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-kube-api-access-ptlwg\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198798 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.198806 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691436 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fd4h" event={"ID":"7d92cbfe-71f2-4dc5-981b-0c52c1169a2d","Type":"ContainerDied","Data":"011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38"} Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691486 4766 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="011d76fe082353b96cc970bb72dff4b8c55c1db17c40364128d01bf738df0e38" Jan 30 17:55:46 crc kubenswrapper[4766]: I0130 17:55:46.691563 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fd4h" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.065603 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:47 crc kubenswrapper[4766]: E0130 17:55:47.066103 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.066119 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.066366 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" containerName="cinder-db-sync" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.067616 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.090452 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.116882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.116962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117093 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117123 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.117169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.202704 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 
30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.204240 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.207292 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.208014 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zh4ls" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.208033 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.211178 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218458 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218523 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.218671 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.219647 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-config\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.219671 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-dns-svc\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.220294 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-sb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.220358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2333655-ed62-419c-a0cc-04a4c9f36938-ovsdbserver-nb\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.226036 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.265142 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x2s\" (UniqueName: \"kubernetes.io/projected/c2333655-ed62-419c-a0cc-04a4c9f36938-kube-api-access-k5x2s\") pod \"dnsmasq-dns-8687c8cf7-7zxrr\" (UID: \"c2333655-ed62-419c-a0cc-04a4c9f36938\") " pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.320898 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.320976 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321002 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321146 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321280 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321327 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.321365 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.388731 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422929 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422955 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.422990 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423035 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423061 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423105 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.423769 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"cinder-api-0\" (UID: 
\"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.427092 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.427896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.430789 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.438146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.448827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"cinder-api-0\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.526631 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:55:47 crc kubenswrapper[4766]: I0130 17:55:47.941181 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8687c8cf7-7zxrr"] Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.232253 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:55:48 crc kubenswrapper[4766]: W0130 17:55:48.268226 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae29236e_6325_4cee_99e8_45b5dbfdae9d.slice/crio-603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5 WatchSource:0}: Error finding container 603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5: Status 404 returned error can't find the container with id 603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5 Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724390 4766 generic.go:334] "Generic (PLEG): container finished" podID="c2333655-ed62-419c-a0cc-04a4c9f36938" containerID="5ca75ddc325a95514c6e15fdb6e4fc3a54b7c81eb0c7e459dacf544d3c7f63c0" exitCode=0 Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerDied","Data":"5ca75ddc325a95514c6e15fdb6e4fc3a54b7c81eb0c7e459dacf544d3c7f63c0"} Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.724545 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerStarted","Data":"fa1bd6f41a82e121deea0f18d4981f1f2d28b4f7c6dc486fddee74ee05ad0cb8"} Jan 30 17:55:48 crc kubenswrapper[4766]: I0130 17:55:48.729894 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.740753 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" event={"ID":"c2333655-ed62-419c-a0cc-04a4c9f36938","Type":"ContainerStarted","Data":"ed8c661ef47eb4ff1b1df085e3ffe9a1985ea919620d4430d8986d970f83d80c"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.741705 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745285 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerStarted","Data":"7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7"} Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.745434 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.765338 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" podStartSLOduration=2.765322754 
podStartE2EDuration="2.765322754s" podCreationTimestamp="2026-01-30 17:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:49.763217947 +0000 UTC m=+5604.401175303" watchObservedRunningTime="2026-01-30 17:55:49.765322754 +0000 UTC m=+5604.403280100" Jan 30 17:55:49 crc kubenswrapper[4766]: I0130 17:55:49.782057 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.7820399780000002 podStartE2EDuration="2.782039978s" podCreationTimestamp="2026-01-30 17:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:55:49.77768644 +0000 UTC m=+5604.415643786" watchObservedRunningTime="2026-01-30 17:55:49.782039978 +0000 UTC m=+5604.419997324" Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.391413 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8687c8cf7-7zxrr" Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.461035 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.461465 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" containerID="cri-o://8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" gracePeriod=10 Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.877170 4766 generic.go:334] "Generic (PLEG): container finished" podID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerID="8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" exitCode=0 Jan 30 17:55:57 crc kubenswrapper[4766]: I0130 17:55:57.877253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6"} Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.077714 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.150790 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.150926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151021 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151118 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.151201 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") pod \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\" (UID: \"c5061d92-9c4a-4434-a5ff-32dcdd752ee7\") " Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.178095 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855" (OuterVolumeSpecName: "kube-api-access-d7855") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "kube-api-access-d7855". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.226727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.242843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config" (OuterVolumeSpecName: "config") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.248804 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.253825 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254092 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7855\" (UniqueName: \"kubernetes.io/projected/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-kube-api-access-d7855\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254220 4766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.254306 4766 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.255550 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5061d92-9c4a-4434-a5ff-32dcdd752ee7" (UID: "c5061d92-9c4a-4434-a5ff-32dcdd752ee7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.356555 4766 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5061d92-9c4a-4434-a5ff-32dcdd752ee7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889400 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" event={"ID":"c5061d92-9c4a-4434-a5ff-32dcdd752ee7","Type":"ContainerDied","Data":"eab82cb398525f14ced0104b7ca1271c77f56fe1657116a66a65ddcab59d73d5"} Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889672 4766 scope.go:117] "RemoveContainer" containerID="8ab9e20fe65596558ff546eec38b875f8a3ae64a2bfbdfcfc73bc1b504627cd6" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.889611 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.934219 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.934517 4766 scope.go:117] "RemoveContainer" containerID="90f36e10b94a3c5bc50fec38f23b2482936896584f12ca38c604afc3476596d6" Jan 30 17:55:58 crc kubenswrapper[4766]: I0130 17:55:58.943199 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85bdb4454f-9zxvr"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.318904 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.319255 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" containerID="cri-o://200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.319301 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" containerID="cri-o://34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.333082 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.333296 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" containerID="cri-o://86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.341449 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.341674 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.365639 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.365885 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.386527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.386793 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" containerID="cri-o://a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 
17:55:59.387241 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" containerID="cri-o://f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" gracePeriod=30 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.753131 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.904469 4766 generic.go:334] "Generic (PLEG): container finished" podID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerID="587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" exitCode=0 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.904548 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerDied","Data":"587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308"} Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.914306 4766 generic.go:334] "Generic (PLEG): container finished" podID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerID="a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" exitCode=143 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.914389 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39"} Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.926078 4766 generic.go:334] "Generic (PLEG): container finished" podID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerID="200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" exitCode=143 Jan 30 17:55:59 crc kubenswrapper[4766]: I0130 17:55:59.926124 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d"} Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.054725 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" path="/var/lib/kubelet/pods/c5061d92-9c4a-4434-a5ff-32dcdd752ee7/volumes" Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.141260 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.144493 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.146034 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:00 crc kubenswrapper[4766]: E0130 17:56:00.146115 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.319279 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397352 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.397589 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") pod \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\" (UID: \"1f688a02-a337-43d9-9cc8-ca5d7ba19898\") " Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.405127 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v" (OuterVolumeSpecName: "kube-api-access-7lv8v") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "kube-api-access-7lv8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.447145 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.449311 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data" (OuterVolumeSpecName: "config-data") pod "1f688a02-a337-43d9-9cc8-ca5d7ba19898" (UID: "1f688a02-a337-43d9-9cc8-ca5d7ba19898"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502524 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502560 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688a02-a337-43d9-9cc8-ca5d7ba19898-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.502574 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lv8v\" (UniqueName: \"kubernetes.io/projected/1f688a02-a337-43d9-9cc8-ca5d7ba19898-kube-api-access-7lv8v\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938444 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f688a02-a337-43d9-9cc8-ca5d7ba19898","Type":"ContainerDied","Data":"9cb907c7defc84de9011e676b2b253841c9ace45df34403f36c123319269cc8b"} Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938513 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:00 crc kubenswrapper[4766]: I0130 17:56:00.938774 4766 scope.go:117] "RemoveContainer" containerID="587a65d7acafa092b997b244d4f222dc6767a0e73e3ea386b5711720a3c42308" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.010254 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.074660 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.074735 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075089 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075101 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075110 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="init" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075115 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="init" Jan 30 17:56:01 crc kubenswrapper[4766]: E0130 17:56:01.075131 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075136 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075613 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.075632 4766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.076306 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.081135 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.082014 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114539 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114581 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.114667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.216622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.217399 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.217426 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.220704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.223911 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d4aa9c5-4f42-495a-921f-986b170dafe4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.233896 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsq5r\" (UniqueName: \"kubernetes.io/projected/5d4aa9c5-4f42-495a-921f-986b170dafe4-kube-api-access-hsq5r\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d4aa9c5-4f42-495a-921f-986b170dafe4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.399868 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.596215 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623568 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623683 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.623716 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") pod \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\" (UID: \"f204102e-c8ed-4d40-b8c3-87c1921f66fb\") " Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.642565 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb" (OuterVolumeSpecName: "kube-api-access-nrcnb") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "kube-api-access-nrcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.664336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data" (OuterVolumeSpecName: "config-data") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.685974 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f204102e-c8ed-4d40-b8c3-87c1921f66fb" (UID: "f204102e-c8ed-4d40-b8c3-87c1921f66fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726338 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726376 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f204102e-c8ed-4d40-b8c3-87c1921f66fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.726390 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrcnb\" (UniqueName: \"kubernetes.io/projected/f204102e-c8ed-4d40-b8c3-87c1921f66fb-kube-api-access-nrcnb\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954554 4766 generic.go:334] "Generic (PLEG): container finished" podID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" exitCode=0 Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954600 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerDied","Data":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954615 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954635 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f204102e-c8ed-4d40-b8c3-87c1921f66fb","Type":"ContainerDied","Data":"6e814b2c7e1b2d9913b671b1270737b16334d9fda854ba42eb91f70d84e1ec11"} Jan 30 17:56:01 crc kubenswrapper[4766]: I0130 17:56:01.954686 4766 scope.go:117] "RemoveContainer" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.001382 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.007333 4766 scope.go:117] "RemoveContainer" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.021708 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: E0130 17:56:02.022498 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": container with ID starting with 86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd not found: ID does not exist" containerID="86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.022554 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd"} err="failed to get container status \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": rpc error: code = NotFound desc = could not find container \"86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd\": container with ID starting with 
86cf6939663ece5ac286102f34cb70f143bd14f042d754439fd56045814822dd not found: ID does not exist" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.031229 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.054259 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f688a02-a337-43d9-9cc8-ca5d7ba19898" path="/var/lib/kubelet/pods/1f688a02-a337-43d9-9cc8-ca5d7ba19898/volumes" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.055404 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" path="/var/lib/kubelet/pods/f204102e-c8ed-4d40-b8c3-87c1921f66fb/volumes" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056154 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: E0130 17:56:02.056676 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056731 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.056940 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f204102e-c8ed-4d40-b8c3-87c1921f66fb" containerName="nova-scheduler-scheduler" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.057655 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.057735 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.061087 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137515 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137593 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.137623 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239119 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239201 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.239220 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.244257 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-config-data\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.245568 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782b2122-c6f0-424d-85b1-efb911f37e20-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.276457 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82bq\" (UniqueName: \"kubernetes.io/projected/782b2122-c6f0-424d-85b1-efb911f37e20-kube-api-access-t82bq\") pod \"nova-scheduler-0\" (UID: \"782b2122-c6f0-424d-85b1-efb911f37e20\") " 
pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.379857 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.539267 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": read tcp 10.217.0.2:58116->10.217.1.66:8774: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.540360 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.66:8774/\": read tcp 10.217.0.2:58118->10.217.1.66:8774: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.585368 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.585587 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" gracePeriod=30 Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.726012 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": read tcp 10.217.0.2:50018->10.217.1.65:8775: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.726026 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": read tcp 10.217.0.2:50004->10.217.1.65:8775: read: connection reset by peer" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.862873 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:56:02 crc kubenswrapper[4766]: W0130 17:56:02.873390 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod782b2122_c6f0_424d_85b1_efb911f37e20.slice/crio-cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a WatchSource:0}: Error finding container cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a: Status 404 returned error can't find the container with id cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.965452 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85bdb4454f-9zxvr" podUID="c5061d92-9c4a-4434-a5ff-32dcdd752ee7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.68:5353: i/o timeout" Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.972833 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"782b2122-c6f0-424d-85b1-efb911f37e20","Type":"ContainerStarted","Data":"cdf3aaa53a6e507a0e3af566f880e4127a11d4a4df4b929ce2242ea377c6f60a"} Jan 30 17:56:02 crc 
kubenswrapper[4766]: I0130 17:56:02.982736 4766 generic.go:334] "Generic (PLEG): container finished" podID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerID="34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" exitCode=0 Jan 30 17:56:02 crc kubenswrapper[4766]: I0130 17:56:02.982820 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.010204 4766 generic.go:334] "Generic (PLEG): container finished" podID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerID="f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" exitCode=0 Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.010332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.013661 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d4aa9c5-4f42-495a-921f-986b170dafe4","Type":"ContainerStarted","Data":"25018228b02ea207d81542655aa9b32ef3784522ec69ac31eb4ff676b85b705b"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.013718 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d4aa9c5-4f42-495a-921f-986b170dafe4","Type":"ContainerStarted","Data":"c7806c8fea73e7b8121b46870f9955dd9e4a2c8319903d9c09f3b36c64d06acc"} Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.018441 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.046122 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.045907379 podStartE2EDuration="3.045907379s" podCreationTimestamp="2026-01-30 17:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:03.033241705 +0000 UTC m=+5617.671199051" watchObservedRunningTime="2026-01-30 17:56:03.045907379 +0000 UTC m=+5617.683864725" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.059881 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.059991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.060044 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.060068 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") pod \"82bd49a0-efdc-46f1-95b8-a706be68208d\" (UID: \"82bd49a0-efdc-46f1-95b8-a706be68208d\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.061791 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs" (OuterVolumeSpecName: "logs") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.084430 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj" (OuterVolumeSpecName: "kube-api-access-h8tgj") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "kube-api-access-h8tgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.128123 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.131080 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data" (OuterVolumeSpecName: "config-data") pod "82bd49a0-efdc-46f1-95b8-a706be68208d" (UID: "82bd49a0-efdc-46f1-95b8-a706be68208d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167672 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82bd49a0-efdc-46f1-95b8-a706be68208d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167713 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167726 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tgj\" (UniqueName: \"kubernetes.io/projected/82bd49a0-efdc-46f1-95b8-a706be68208d-kube-api-access-h8tgj\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.167737 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bd49a0-efdc-46f1-95b8-a706be68208d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.203434 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.268603 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.268906 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269008 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269221 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") pod \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\" (UID: \"d0670fd5-b8de-408e-9cfa-b594e8e3aa84\") " Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.269812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs" (OuterVolumeSpecName: "logs") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.271396 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.275369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns" (OuterVolumeSpecName: "kube-api-access-92bns") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "kube-api-access-92bns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.335157 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data" (OuterVolumeSpecName: "config-data") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.338498 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0670fd5-b8de-408e-9cfa-b594e8e3aa84" (UID: "d0670fd5-b8de-408e-9cfa-b594e8e3aa84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374094 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374125 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: I0130 17:56:03.374136 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bns\" (UniqueName: \"kubernetes.io/projected/d0670fd5-b8de-408e-9cfa-b594e8e3aa84-kube-api-access-92bns\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.401557 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.412073 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.417967 4766 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:56:03 crc kubenswrapper[4766]: E0130 17:56:03.418276 4766 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.028191 4766 generic.go:334] "Generic (PLEG): container finished" podID="c6725384-f878-416e-832e-64ea63dc6698" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" exitCode=0 Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.028369 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerDied","Data":"c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.031458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"82bd49a0-efdc-46f1-95b8-a706be68208d","Type":"ContainerDied","Data":"66dc3da390f241d612fa55fe27e56687a1e8882de35f533a122e60bb3d2e3202"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.031538 4766 scope.go:117] "RemoveContainer" containerID="f7a15f090c543f159f64b81fc90febf534407d29f511b8ad8202cf69378c21f4" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.032194 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.049292 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.054745 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"782b2122-c6f0-424d-85b1-efb911f37e20","Type":"ContainerStarted","Data":"3f1f30daaa1e0931fb7ea855dc99e864ca970d09d174b3686e8c7026c65b948f"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.054936 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0670fd5-b8de-408e-9cfa-b594e8e3aa84","Type":"ContainerDied","Data":"79c9df2100d6bd4132153d14d3ae6f09c3f6598da8bf5ede5fb0e766b11c0c04"} Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.077696 4766 scope.go:117] "RemoveContainer" containerID="a0d7c7e6d2cb5633e8a0b4e0bc52406e3e7faf95042bec5169821f0c2ab91d39" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.093517 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.09347907 podStartE2EDuration="3.09347907s" podCreationTimestamp="2026-01-30 17:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:04.070622789 +0000 UTC m=+5618.708580135" watchObservedRunningTime="2026-01-30 17:56:04.09347907 +0000 UTC m=+5618.731436416" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.159292 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.176162 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.185555 4766 scope.go:117] "RemoveContainer" containerID="34892f0d77a4bfb5e47c1f7f0fc93f06bb57eddf06d58f3f97423ed2b6e202d3" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.190653 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191145 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191170 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191211 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191219 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191234 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191243 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: E0130 17:56:04.191256 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191264 4766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191471 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-api" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191484 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191494 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" containerName="nova-api-log" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.191505 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" containerName="nova-metadata-metadata" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.192886 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.207264 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.211547 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.221217 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.243230 4766 scope.go:117] "RemoveContainer" containerID="200bcd264043dcad571b98db0257dd6c2f6205e9a8442561bca96aee3f006c3d" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.255288 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.268919 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.270665 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.276531 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.286475 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299357 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299381 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.299408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405374 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405530 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405622 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405642 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.405668 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.406151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af618003-f485-4daa-bedb-d1408b4547bb-logs\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.413266 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-config-data\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.413934 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af618003-f485-4daa-bedb-d1408b4547bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.428861 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426pd\" (UniqueName: \"kubernetes.io/projected/af618003-f485-4daa-bedb-d1408b4547bb-kube-api-access-426pd\") pod \"nova-api-0\" (UID: \"af618003-f485-4daa-bedb-d1408b4547bb\") " pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.492788 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506814 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506941 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.506980 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") pod \"c6725384-f878-416e-832e-64ea63dc6698\" (UID: \"c6725384-f878-416e-832e-64ea63dc6698\") " Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507227 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507255 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507337 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.507352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.508824 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/374fa21e-428d-4383-9124-5272df0552d4-logs\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.511574 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.512380 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374fa21e-428d-4383-9124-5272df0552d4-config-data\") pod \"nova-metadata-0\" (UID: 
\"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.512920 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2" (OuterVolumeSpecName: "kube-api-access-6kwb2") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "kube-api-access-6kwb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.539986 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.542539 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.545490 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2s4\" (UniqueName: \"kubernetes.io/projected/374fa21e-428d-4383-9124-5272df0552d4-kube-api-access-dx2s4\") pod \"nova-metadata-0\" (UID: \"374fa21e-428d-4383-9124-5272df0552d4\") " pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.588378 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data" (OuterVolumeSpecName: "config-data") pod "c6725384-f878-416e-832e-64ea63dc6698" (UID: "c6725384-f878-416e-832e-64ea63dc6698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.598378 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608750 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kwb2\" (UniqueName: \"kubernetes.io/projected/c6725384-f878-416e-832e-64ea63dc6698-kube-api-access-6kwb2\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608819 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:04 crc kubenswrapper[4766]: I0130 17:56:04.608829 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6725384-f878-416e-832e-64ea63dc6698-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.034826 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.056726 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"582f8309a7ee5444494d0cee309368d965f0b1605401c50bae8f9becb98ea8cf"} Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058111 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c6725384-f878-416e-832e-64ea63dc6698","Type":"ContainerDied","Data":"04dfddcb65778a7ed5dd4fe1da7afcca1ade4d7f0563c40559bc94e19e6acdc2"} Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058146 4766 scope.go:117] "RemoveContainer" containerID="c2aeea8ee2f173823cfcba5d88e64c5feb602801106b43496eaa109ece4c74aa" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.058250 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.099514 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.115384 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.125583 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: E0130 17:56:05.126104 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.126122 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.126632 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6725384-f878-416e-832e-64ea63dc6698" containerName="nova-cell0-conductor-conductor" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.127444 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.133010 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.135987 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.180743 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321198 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321287 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.321871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423786 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423869 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.423902 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.430862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.430877 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.444621 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmp2x\" (UniqueName: \"kubernetes.io/projected/463fa20b-ef02-4b0a-ae8e-3fed6dc02c37-kube-api-access-rmp2x\") pod \"nova-cell0-conductor-0\" (UID: \"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.470846 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:05 crc kubenswrapper[4766]: I0130 17:56:05.952490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:56:05 crc kubenswrapper[4766]: W0130 17:56:05.958623 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod463fa20b_ef02_4b0a_ae8e_3fed6dc02c37.slice/crio-dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b WatchSource:0}: Error finding container dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b: Status 404 returned error can't find the container with id dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.065840 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bd49a0-efdc-46f1-95b8-a706be68208d" path="/var/lib/kubelet/pods/82bd49a0-efdc-46f1-95b8-a706be68208d/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.066463 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6725384-f878-416e-832e-64ea63dc6698" path="/var/lib/kubelet/pods/c6725384-f878-416e-832e-64ea63dc6698/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.066985 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0670fd5-b8de-408e-9cfa-b594e8e3aa84" path="/var/lib/kubelet/pods/d0670fd5-b8de-408e-9cfa-b594e8e3aa84/volumes" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.078614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37","Type":"ContainerStarted","Data":"dc02ab14c8c15463c9f164f6bca8410d544fd0d3db20728364752bb7f512008b"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081702 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"b255fb203861cf38aece6b9f19759ab6362bcc74d07753d836d503a0f0531810"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081756 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"ea2df8a4ca0b63725bb799c30ae8cd374fcac1fed842b7154565bcd302c5ab2b"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.081770 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"374fa21e-428d-4383-9124-5272df0552d4","Type":"ContainerStarted","Data":"39dd9d346cb8093b57a4b81986998af18fb6480cbee4c8d238152e5b9603eba8"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.084248 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"dbd2f7fdf9f744ecda2014b7d25bc692be44ca5a2049413c28d896553b81626d"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.084288 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af618003-f485-4daa-bedb-d1408b4547bb","Type":"ContainerStarted","Data":"f7d9b9f0781c098e45f79bc0549e0721fc8c2f87f4c435181fa68f3e690a10fb"} Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.140327 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.14030674 podStartE2EDuration="2.14030674s" podCreationTimestamp="2026-01-30 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:06.132528189 +0000 UTC m=+5620.770485545" watchObservedRunningTime="2026-01-30 17:56:06.14030674 +0000 UTC m=+5620.778264086" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.158907 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.158890305 podStartE2EDuration="2.158890305s" podCreationTimestamp="2026-01-30 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:06.157034765 +0000 UTC m=+5620.794992111" watchObservedRunningTime="2026-01-30 17:56:06.158890305 +0000 UTC m=+5620.796847651" Jan 30 17:56:06 crc kubenswrapper[4766]: I0130 17:56:06.401128 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.095603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"463fa20b-ef02-4b0a-ae8e-3fed6dc02c37","Type":"ContainerStarted","Data":"148fa7325166aabadf12d512e159985b0672ebc805e033bb178eeafce376f3b6"} Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.119600 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.119580455 podStartE2EDuration="2.119580455s" podCreationTimestamp="2026-01-30 17:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:07.109019068 +0000 UTC m=+5621.746976414" watchObservedRunningTime="2026-01-30 17:56:07.119580455 +0000 UTC m=+5621.757537801" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.393217 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.694563 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871075 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.871301 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") pod \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\" (UID: \"42ca03b3-7414-49ac-8fb1-7d2489d1c251\") " Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.876644 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq" (OuterVolumeSpecName: "kube-api-access-qz5gq") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "kube-api-access-qz5gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.896220 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.899577 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data" (OuterVolumeSpecName: "config-data") pod "42ca03b3-7414-49ac-8fb1-7d2489d1c251" (UID: "42ca03b3-7414-49ac-8fb1-7d2489d1c251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973196 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz5gq\" (UniqueName: \"kubernetes.io/projected/42ca03b3-7414-49ac-8fb1-7d2489d1c251-kube-api-access-qz5gq\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973244 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:07 crc kubenswrapper[4766]: I0130 17:56:07.973255 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ca03b3-7414-49ac-8fb1-7d2489d1c251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.104411 4766 generic.go:334] "Generic (PLEG): container finished" podID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" exitCode=0 Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105191 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105637 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerDied","Data":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"} Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105677 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"42ca03b3-7414-49ac-8fb1-7d2489d1c251","Type":"ContainerDied","Data":"18d42518db1b0bb06251f082044f954d0b9d14d82dbcc6772e7d16a38b44879b"} Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.105694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.106035 4766 scope.go:117] "RemoveContainer" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.132057 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.147634 4766 scope.go:117] "RemoveContainer" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: E0130 17:56:08.149473 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": container with ID starting with 4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5 not found: ID does not exist" containerID="4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.149516 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5"} err="failed to get container status \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": rpc error: code = NotFound desc = could not find container \"4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5\": container with ID 
starting with 4d171e1c1b30aea9ede9ee62f7dba8a4c3e8962d723b023c388dffbcb48561c5 not found: ID does not exist" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.156980 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.169384 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: E0130 17:56:08.169790 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.169806 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.170016 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" containerName="nova-cell1-conductor-conductor" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.170640 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.173604 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.185483 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278030 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278147 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.278497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.381899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.382083 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc 
kubenswrapper[4766]: I0130 17:56:08.382266 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.388849 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.401712 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.402131 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vdq\" (UniqueName: \"kubernetes.io/projected/b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e-kube-api-access-h9vdq\") pod \"nova-cell1-conductor-0\" (UID: \"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.493498 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:08 crc kubenswrapper[4766]: I0130 17:56:08.933321 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:56:08 crc kubenswrapper[4766]: W0130 17:56:08.936991 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4061a48_dd7c_4b2f_aa8b_422eb8f65c1e.slice/crio-01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7 WatchSource:0}: Error finding container 01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7: Status 404 returned error can't find the container with id 01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7 Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.045802 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.046160 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.114381 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e","Type":"ContainerStarted","Data":"01e3823cb67cae187a32f2df5a42e36b409fe11085bbeb1f4ac66c22f9b339f7"} Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.599153 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 30 17:56:09 crc kubenswrapper[4766]: I0130 17:56:09.599221 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.049616 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ca03b3-7414-49ac-8fb1-7d2489d1c251" path="/var/lib/kubelet/pods/42ca03b3-7414-49ac-8fb1-7d2489d1c251/volumes" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.127015 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e","Type":"ContainerStarted","Data":"f0ce49675782d93734e1bb3bec7969fd7a37f6cf9d4f90c57370abf0dc245664"} Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.127862 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:10 crc kubenswrapper[4766]: I0130 17:56:10.147074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.147057327 podStartE2EDuration="2.147057327s" podCreationTimestamp="2026-01-30 17:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:10.146287366 +0000 UTC m=+5624.784244732" watchObservedRunningTime="2026-01-30 17:56:10.147057327 +0000 UTC m=+5624.785014673" Jan 30 17:56:11 crc kubenswrapper[4766]: I0130 17:56:11.401565 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:11 crc kubenswrapper[4766]: I0130 17:56:11.412077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.155641 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.380077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:56:12 crc kubenswrapper[4766]: I0130 17:56:12.404696 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:56:13 crc kubenswrapper[4766]: I0130 17:56:13.176527 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.540873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.541237 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.599873 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:56:14 crc kubenswrapper[4766]: I0130 17:56:14.599923 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.500762 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.623443 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="af618003-f485-4daa-bedb-d1408b4547bb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.623443 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af618003-f485-4daa-bedb-d1408b4547bb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.705798 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="374fa21e-428d-4383-9124-5272df0552d4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:15 crc kubenswrapper[4766]: I0130 17:56:15.706266 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="374fa21e-428d-4383-9124-5272df0552d4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.77:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.116254 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.118061 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.123876 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.135633 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162402 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162485 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162625 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162731 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc 
kubenswrapper[4766]: I0130 17:56:17.162865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.162903 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264335 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264396 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264441 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264530 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.264550 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.265539 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.270151 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.270407 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.272503 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.273168 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.284340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"cinder-scheduler-0\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.445749 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:17 crc kubenswrapper[4766]: I0130 17:56:17.914639 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.189311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"bd8c344070ab05d750472242fd65de8a04e107ddf96ae138a125614afab2f3d2"} Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.524550 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.745332 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.745943 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" containerID="cri-o://7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" gracePeriod=30 Jan 30 17:56:18 crc kubenswrapper[4766]: I0130 17:56:18.746443 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" containerID="cri-o://c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" gracePeriod=30 Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.198613 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerID="7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" exitCode=143 Jan 30 
17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.198702 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.200405 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.200450 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerStarted","Data":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"} Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.230663 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.230640415 podStartE2EDuration="2.230640415s" podCreationTimestamp="2026-01-30 17:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:19.219831771 +0000 UTC m=+5633.857789117" watchObservedRunningTime="2026-01-30 17:56:19.230640415 +0000 UTC m=+5633.868597781" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.374436 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.376910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.384317 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.415459 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508277 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508338 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508380 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508420 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508444 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508465 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508489 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508547 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508577 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508597 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508632 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508670 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508695 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508774 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.508814 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610576 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610651 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610693 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610733 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610778 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" 
Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610804 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610822 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610839 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610856 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.610933 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611022 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611081 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611160 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611213 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.611236 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-run\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.613579 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.613865 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614449 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614477 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614553 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614634 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.614664 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cf1121a2-7545-40c9-9280-9337e94554d9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.617389 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.617459 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.619933 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.620359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.620560 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1121a2-7545-40c9-9280-9337e94554d9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.629798 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5985v\" (UniqueName: \"kubernetes.io/projected/cf1121a2-7545-40c9-9280-9337e94554d9-kube-api-access-5985v\") pod \"cinder-volume-volume1-0\" (UID: \"cf1121a2-7545-40c9-9280-9337e94554d9\") " pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.719087 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.978594 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.980760 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.983623 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 30 17:56:19 crc kubenswrapper[4766]: I0130 17:56:19.991902 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.087086 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.115376 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122536 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122598 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122693 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122734 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122748 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122766 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.122821 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123115 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123227 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123288 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123406 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123462 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123501 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.123539 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.209732 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"146b4e6fed4059a7b0f7215f8e3b9261dd1e347a79471ba7bda8a46a110ffac7"} Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 
17:56:20.225302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225428 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225448 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225466 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225542 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225694 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc 
kubenswrapper[4766]: I0130 17:56:20.225774 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225792 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.225782 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-lib-modules\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226049 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226076 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-run\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226132 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226233 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-sys\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226275 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226302 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226304 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226335 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226482 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226578 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226665 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226731 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.226870 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1a4ab9dd-be94-4701-a0ba-55dde27e9543-dev\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.234000 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-scripts\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.235078 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.235519 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-ceph\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.236956 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.237918 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1a4ab9dd-be94-4701-a0ba-55dde27e9543-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.250873 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cnl\" (UniqueName: \"kubernetes.io/projected/1a4ab9dd-be94-4701-a0ba-55dde27e9543-kube-api-access-j6cnl\") pod \"cinder-backup-0\" (UID: \"1a4ab9dd-be94-4701-a0ba-55dde27e9543\") " pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.312638 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 30 17:56:20 crc kubenswrapper[4766]: I0130 17:56:20.867308 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 30 17:56:20 crc kubenswrapper[4766]: W0130 17:56:20.877956 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4ab9dd_be94_4701_a0ba_55dde27e9543.slice/crio-96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98 WatchSource:0}: Error finding container 96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98: Status 404 returned error can't find the container with id 96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98 Jan 30 17:56:21 crc kubenswrapper[4766]: I0130 17:56:21.219969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"96a2de712568508cf1d1c68114b04d443a069839b11b2c111dcca205ca08ea98"} Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.235618 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"42bfdb3153947fd27f29e304c5840adc4afcdad64331539bae96d661874821c3"} Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.236135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"cf1121a2-7545-40c9-9280-9337e94554d9","Type":"ContainerStarted","Data":"37e3f83e319f5434e415c94d5864b501a939dc7bff638335a31077f6430d92fe"} Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.238033 4766 generic.go:334] "Generic (PLEG): container finished" podID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerID="c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" exitCode=0 Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.238094 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360"} Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.242238 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"ea4f8155551129bdd1136314348feddd5496f037a509057432b02004d45ec400"} Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.242304 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1a4ab9dd-be94-4701-a0ba-55dde27e9543","Type":"ContainerStarted","Data":"f287689326ec338c74496105b1c0ac7e73fb399b0d9513755b69ccd5bc6ccda5"} Jan 30 17:56:22 crc kubenswrapper[4766]: 
I0130 17:56:22.295562 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.149010573 podStartE2EDuration="3.295543194s" podCreationTimestamp="2026-01-30 17:56:19 +0000 UTC" firstStartedPulling="2026-01-30 17:56:20.115076782 +0000 UTC m=+5634.753034128" lastFinishedPulling="2026-01-30 17:56:21.261609403 +0000 UTC m=+5635.899566749" observedRunningTime="2026-01-30 17:56:22.286072426 +0000 UTC m=+5636.924029792" watchObservedRunningTime="2026-01-30 17:56:22.295543194 +0000 UTC m=+5636.933500540" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.314268 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.480187822 podStartE2EDuration="3.314250091s" podCreationTimestamp="2026-01-30 17:56:19 +0000 UTC" firstStartedPulling="2026-01-30 17:56:20.88141375 +0000 UTC m=+5635.519371096" lastFinishedPulling="2026-01-30 17:56:21.715476019 +0000 UTC m=+5636.353433365" observedRunningTime="2026-01-30 17:56:22.306505542 +0000 UTC m=+5636.944462888" watchObservedRunningTime="2026-01-30 17:56:22.314250091 +0000 UTC m=+5636.952207427" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.323190 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385513 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385883 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385926 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.385959 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386049 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386066 4766 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") pod \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\" (UID: \"ae29236e-6325-4cee-99e8-45b5dbfdae9d\") " Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.386419 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.387719 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs" (OuterVolumeSpecName: "logs") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.393645 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b" (OuterVolumeSpecName: "kube-api-access-z5c4b") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "kube-api-access-z5c4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.397542 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts" (OuterVolumeSpecName: "scripts") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.400924 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.436090 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.448693 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.458251 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data" (OuterVolumeSpecName: "config-data") pod "ae29236e-6325-4cee-99e8-45b5dbfdae9d" (UID: "ae29236e-6325-4cee-99e8-45b5dbfdae9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.487975 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488379 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488394 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488403 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae29236e-6325-4cee-99e8-45b5dbfdae9d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488414 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae29236e-6325-4cee-99e8-45b5dbfdae9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488422 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae29236e-6325-4cee-99e8-45b5dbfdae9d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:22 crc kubenswrapper[4766]: I0130 17:56:22.488429 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5c4b\" (UniqueName: \"kubernetes.io/projected/ae29236e-6325-4cee-99e8-45b5dbfdae9d-kube-api-access-z5c4b\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.258897 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.258964 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae29236e-6325-4cee-99e8-45b5dbfdae9d","Type":"ContainerDied","Data":"603dadae66a61f77a7416e142f56c189014aab67ac07047867138dbd5a061aa5"} Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.259111 4766 scope.go:117] "RemoveContainer" containerID="c08867f77925f297d8364ac04af980e47fe8184765c9411990b3db0e28b7c360" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.316511 4766 scope.go:117] "RemoveContainer" containerID="7d3561f611119703905071b1a184200e4fd9b43325527e17a71ec76489c683e7" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.333427 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.344659 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367286 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: E0130 17:56:23.367777 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367812 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: E0130 17:56:23.367845 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.367855 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.368649 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.368685 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" containerName="cinder-api-log" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.369840 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.371997 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.379473 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418121 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418297 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418350 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418390 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418532 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418783 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.418823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521021 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521378 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521439 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521510 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521537 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521563 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.521586 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.522714 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a81891-2796-4952-bf9e-9a9f83668e34-logs\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.524972 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9a81891-2796-4952-bf9e-9a9f83668e34-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.530997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.530997 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.537264 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-scripts\") 
pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.539120 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25rt\" (UniqueName: \"kubernetes.io/projected/e9a81891-2796-4952-bf9e-9a9f83668e34-kube-api-access-s25rt\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.540300 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a81891-2796-4952-bf9e-9a9f83668e34-config-data\") pod \"cinder-api-0\" (UID: \"e9a81891-2796-4952-bf9e-9a9f83668e34\") " pod="openstack/cinder-api-0" Jan 30 17:56:23 crc kubenswrapper[4766]: I0130 17:56:23.685983 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.050688 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae29236e-6325-4cee-99e8-45b5dbfdae9d" path="/var/lib/kubelet/pods/ae29236e-6325-4cee-99e8-45b5dbfdae9d/volumes" Jan 30 17:56:24 crc kubenswrapper[4766]: W0130 17:56:24.139243 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a81891_2796_4952_bf9e_9a9f83668e34.slice/crio-c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c WatchSource:0}: Error finding container c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c: Status 404 returned error can't find the container with id c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.155705 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.270464 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"c69488c4c10198d030beb4df329e9bcb690f401cb0a8192d87870bcbeaa7c95c"} Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.544744 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.546121 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.546193 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.550706 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.602375 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.602462 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.605821 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:56:24 crc kubenswrapper[4766]: I0130 17:56:24.606163 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:56:24 
crc kubenswrapper[4766]: I0130 17:56:24.719541 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.282734 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"2192d8d5a852ee6ed3f6476d054fb4f52aff5f1be451d555a155c7c49dad9876"} Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.283090 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.286327 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:56:25 crc kubenswrapper[4766]: I0130 17:56:25.314586 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 30 17:56:26 crc kubenswrapper[4766]: I0130 17:56:26.293957 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9a81891-2796-4952-bf9e-9a9f83668e34","Type":"ContainerStarted","Data":"b5b53a4ca11ebcd9760615d00e06b7cf70cec663cd6c22fc2f2a84d6c6a88377"} Jan 30 17:56:26 crc kubenswrapper[4766]: I0130 17:56:26.321136 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.3211141619999998 podStartE2EDuration="3.321114162s" podCreationTimestamp="2026-01-30 17:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:26.31738588 +0000 UTC m=+5640.955343256" watchObservedRunningTime="2026-01-30 17:56:26.321114162 +0000 UTC m=+5640.959071508" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.302201 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.661786 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 17:56:27 crc kubenswrapper[4766]: I0130 17:56:27.712097 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:56:28 crc kubenswrapper[4766]: I0130 17:56:28.310254 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe" containerID="cri-o://8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" gracePeriod=30 Jan 30 17:56:28 crc kubenswrapper[4766]: I0130 17:56:28.310210 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler" containerID="cri-o://4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" gracePeriod=30 Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.320918 4766 generic.go:334] "Generic (PLEG): container finished" podID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236" exitCode=0 Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.320969 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} Jan 30 17:56:29 crc kubenswrapper[4766]: I0130 17:56:29.914377 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.528903 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.837617 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884760 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884822 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884861 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884885 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.884978 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885006 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885065 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") pod \"e39e170e-2256-4796-a06f-b1e63a1425cb\" (UID: \"e39e170e-2256-4796-a06f-b1e63a1425cb\") " Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.885573 4766 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e170e-2256-4796-a06f-b1e63a1425cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.897448 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.897510 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t" (OuterVolumeSpecName: "kube-api-access-kgg4t") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "kube-api-access-kgg4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.898627 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts" (OuterVolumeSpecName: "scripts") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.935664 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987484 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987524 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgg4t\" (UniqueName: \"kubernetes.io/projected/e39e170e-2256-4796-a06f-b1e63a1425cb-kube-api-access-kgg4t\") on node \"crc\" DevicePath \"\""
Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987539 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.987548 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:56:30 crc kubenswrapper[4766]: I0130 17:56:30.992524 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data" (OuterVolumeSpecName: "config-data") pod "e39e170e-2256-4796-a06f-b1e63a1425cb" (UID: "e39e170e-2256-4796-a06f-b1e63a1425cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.089840 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e170e-2256-4796-a06f-b1e63a1425cb-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358902 4766 generic.go:334] "Generic (PLEG): container finished" podID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168" exitCode=0
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358953 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"}
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.358984 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e39e170e-2256-4796-a06f-b1e63a1425cb","Type":"ContainerDied","Data":"bd8c344070ab05d750472242fd65de8a04e107ddf96ae138a125614afab2f3d2"}
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.359001 4766 scope.go:117] "RemoveContainer" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.359012 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.385159 4766 scope.go:117] "RemoveContainer" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.408194 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.426248 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434203 4766 scope.go:117] "RemoveContainer" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"
Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.434677 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": container with ID starting with 8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236 not found: ID does not exist" containerID="8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434739 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236"} err="failed to get container status \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": rpc error: code = NotFound desc = could not find container \"8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236\": container with ID starting with 8f7986fdf0509a3b313a84b7431579c1b97d51683c10261eb94b9c452088e236 not found: ID does not exist"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.434768 4766 scope.go:117] "RemoveContainer" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"
Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.435195 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": container with ID starting with 4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168 not found: ID does not exist" containerID="4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.435224 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168"} err="failed to get container status \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": rpc error: code = NotFound desc = could not find container \"4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168\": container with ID starting with 4d1dcfb137f45248ef24c36aea3a345b897d78946e574c91d0a4df5b10898168 not found: ID does not exist"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438378 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.438855 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438874 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe"
Jan 30 17:56:31 crc kubenswrapper[4766]: E0130 17:56:31.438895 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.438902 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.439071 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="cinder-scheduler"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.439102 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" containerName="probe"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.440095 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.444539 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.447776 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498558 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.498677 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499302 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499356 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.499465 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603062 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603143 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603202 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603260 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603266 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/598edf34-3970-416e-b9fb-4de69de61ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603412 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.603435 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.617412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.617543 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.618004 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.618430 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598edf34-3970-416e-b9fb-4de69de61ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.621418 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm5z4\" (UniqueName: \"kubernetes.io/projected/598edf34-3970-416e-b9fb-4de69de61ca1-kube-api-access-rm5z4\") pod \"cinder-scheduler-0\" (UID: \"598edf34-3970-416e-b9fb-4de69de61ca1\") " pod="openstack/cinder-scheduler-0"
Jan 30 17:56:31 crc kubenswrapper[4766]: I0130 17:56:31.778524 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.053142 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39e170e-2256-4796-a06f-b1e63a1425cb" path="/var/lib/kubelet/pods/e39e170e-2256-4796-a06f-b1e63a1425cb/volumes"
Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.264052 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:56:32 crc kubenswrapper[4766]: W0130 17:56:32.271322 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod598edf34_3970_416e_b9fb_4de69de61ca1.slice/crio-ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6 WatchSource:0}: Error finding container ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6: Status 404 returned error can't find the container with id ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6
Jan 30 17:56:32 crc kubenswrapper[4766]: I0130 17:56:32.377518 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"ed3148ec89b342916a0190ca83b69eb216d71af20531f264a17f34d538250ed6"}
Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.392582 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"502ef3346c314657d354bff20d968ed7b9231399eab8fd575b2133cd6c7a0701"}
Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.393398 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"598edf34-3970-416e-b9fb-4de69de61ca1","Type":"ContainerStarted","Data":"157ace3d760d4885134b3cac4a4f23aa79dd7bbc39cd2738fc79abde829f0bec"}
Jan 30 17:56:33 crc kubenswrapper[4766]: I0130 17:56:33.421712 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.421691854 podStartE2EDuration="2.421691854s" podCreationTimestamp="2026-01-30 17:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:56:33.413853381 +0000 UTC m=+5648.051810727" watchObservedRunningTime="2026-01-30 17:56:33.421691854 +0000 UTC m=+5648.059649200"
Jan 30 17:56:35 crc kubenswrapper[4766]: I0130 17:56:35.482646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 30 17:56:36 crc kubenswrapper[4766]: I0130 17:56:36.779374 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 17:56:39 crc kubenswrapper[4766]: I0130 17:56:39.045860 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:56:39 crc kubenswrapper[4766]: I0130 17:56:39.046265 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:56:42 crc kubenswrapper[4766]: I0130 17:56:42.002061 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045144 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045785 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.045831 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.046442 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.046531 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" gracePeriod=600
Jan 30 17:57:09 crc kubenswrapper[4766]: E0130 17:57:09.175284 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.741513 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" exitCode=0
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.741620 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"}
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.741934 4766 scope.go:117] "RemoveContainer" containerID="8e6d5be2cdd78ae95945579ba29f0735f8e5f2a5f43aacf73ebc0159baabfa78"
Jan 30 17:57:09 crc kubenswrapper[4766]: I0130 17:57:09.743695 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:57:09 crc kubenswrapper[4766]: E0130 17:57:09.744066 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:19 crc kubenswrapper[4766]: I0130 17:57:19.968219 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:19 crc kubenswrapper[4766]: I0130 17:57:19.971128 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.010531 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020209 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020358 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.020619 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.121893 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122389 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122604 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.122927 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.147057 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"redhat-marketplace-7xd9g\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") " pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.301669 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.792471 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:20 crc kubenswrapper[4766]: I0130 17:57:20.834775 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"5ad2c1eba53c17a6fdc63d95b48d182c3d830b443e9884d0d531c8423ad14e81"}
Jan 30 17:57:21 crc kubenswrapper[4766]: I0130 17:57:21.850622 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05" exitCode=0
Jan 30 17:57:21 crc kubenswrapper[4766]: I0130 17:57:21.850746 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"}
Jan 30 17:57:22 crc kubenswrapper[4766]: I0130 17:57:22.860310 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"}
Jan 30 17:57:23 crc kubenswrapper[4766]: I0130 17:57:23.875598 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6" exitCode=0
Jan 30 17:57:23 crc kubenswrapper[4766]: I0130 17:57:23.875681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"}
Jan 30 17:57:24 crc kubenswrapper[4766]: I0130 17:57:24.885384 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerStarted","Data":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"}
Jan 30 17:57:24 crc kubenswrapper[4766]: I0130 17:57:24.916168 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xd9g" podStartSLOduration=3.521994322 podStartE2EDuration="5.916148781s" podCreationTimestamp="2026-01-30 17:57:19 +0000 UTC" firstStartedPulling="2026-01-30 17:57:21.85486187 +0000 UTC m=+5696.492819216" lastFinishedPulling="2026-01-30 17:57:24.249016329 +0000 UTC m=+5698.886973675" observedRunningTime="2026-01-30 17:57:24.903913928 +0000 UTC m=+5699.541871314" watchObservedRunningTime="2026-01-30 17:57:24.916148781 +0000 UTC m=+5699.554106127"
Jan 30 17:57:25 crc kubenswrapper[4766]: I0130 17:57:25.039658 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:57:25 crc kubenswrapper[4766]: E0130 17:57:25.040123 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.302136 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.302857 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.355474 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:30 crc kubenswrapper[4766]: I0130 17:57:30.987710 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:31 crc kubenswrapper[4766]: I0130 17:57:31.048764 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.544473 4766 scope.go:117] "RemoveContainer" containerID="2284b685070b20ff7f99a6b288edfe628604e9b16f379e70a8725075d3d9749a"
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.567360 4766 scope.go:117] "RemoveContainer" containerID="325111ae8b2b39896c73638f1c0026db7d59ab4097cfdf84ec6a851d0d088ecd"
Jan 30 17:57:32 crc kubenswrapper[4766]: I0130 17:57:32.956282 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xd9g" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server" containerID="cri-o://9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" gracePeriod=2
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.429428 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528292 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528438 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.528464 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") pod \"73284976-6eff-4a55-b925-9d82571c7f79\" (UID: \"73284976-6eff-4a55-b925-9d82571c7f79\") "
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.531535 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities" (OuterVolumeSpecName: "utilities") pod "73284976-6eff-4a55-b925-9d82571c7f79" (UID: "73284976-6eff-4a55-b925-9d82571c7f79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.535102 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz" (OuterVolumeSpecName: "kube-api-access-kdndz") pod "73284976-6eff-4a55-b925-9d82571c7f79" (UID: "73284976-6eff-4a55-b925-9d82571c7f79"). InnerVolumeSpecName "kube-api-access-kdndz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:57:33 crc kubenswrapper[4766]: I0130 17:57:33.551533 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73284976-6eff-4a55-b925-9d82571c7f79" (UID: "73284976-6eff-4a55-b925-9d82571c7f79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631319 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631363 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdndz\" (UniqueName: \"kubernetes.io/projected/73284976-6eff-4a55-b925-9d82571c7f79-kube-api-access-kdndz\") on node \"crc\" DevicePath \"\""
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.631374 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73284976-6eff-4a55-b925-9d82571c7f79-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968045 4766 generic.go:334] "Generic (PLEG): container finished" podID="73284976-6eff-4a55-b925-9d82571c7f79" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d" exitCode=0
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968085 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"}
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968110 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xd9g" event={"ID":"73284976-6eff-4a55-b925-9d82571c7f79","Type":"ContainerDied","Data":"5ad2c1eba53c17a6fdc63d95b48d182c3d830b443e9884d0d531c8423ad14e81"}
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968126 4766 scope.go:117] "RemoveContainer" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:33.968323 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xd9g"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.592826 4766 scope.go:117] "RemoveContainer" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.604854 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.615749 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xd9g"]
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.634849 4766 scope.go:117] "RemoveContainer" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.685429 4766 scope.go:117] "RemoveContainer" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"
Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.687523 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": container with ID starting with 9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d not found: ID does not exist" containerID="9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.687565 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d"} err="failed to get container status \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": rpc error: code = NotFound desc = could not find container \"9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d\": container with ID starting with 9db3b26914f47e4c95b7d3f9ef56990a55920419a41ff416a50d4e83163c6f9d not found: ID does not exist"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.687615 4766 scope.go:117] "RemoveContainer" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"
Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.688167 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": container with ID starting with eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6 not found: ID does not exist" containerID="eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688213 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6"} err="failed to get container status \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": rpc error: code = NotFound desc = could not find container \"eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6\": container with ID starting with eeff4c334f54ceabf7a4ea4b97c342b0012a18553932e458b855bc22698be8e6 not found: ID does not exist"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688230 4766 scope.go:117] "RemoveContainer" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"
Jan 30 17:57:34 crc kubenswrapper[4766]: E0130 17:57:34.688834 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": container with ID starting with 5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05 not found: ID does not exist" containerID="5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"
Jan 30 17:57:34 crc kubenswrapper[4766]: I0130 17:57:34.688860 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05"} err="failed to get container status \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": rpc error: code = NotFound desc = could not find container \"5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05\": container with ID starting with 5b1f8184a634e79e80c90c049517e3656d817b66da23ddb1a86347d3bc926e05 not found: ID does not exist"
Jan 30 17:57:36 crc kubenswrapper[4766]: I0130 17:57:36.052976 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73284976-6eff-4a55-b925-9d82571c7f79" path="/var/lib/kubelet/pods/73284976-6eff-4a55-b925-9d82571c7f79/volumes"
Jan 30 17:57:40 crc kubenswrapper[4766]: I0130 17:57:40.040348 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:57:40 crc kubenswrapper[4766]: E0130 17:57:40.041161 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.352121 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"]
Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353240 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353256 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server"
Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353277 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-utilities"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353285 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-utilities"
Jan 30 17:57:44 crc kubenswrapper[4766]: E0130 17:57:44.353304 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-content"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353309 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="extract-content"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.353493 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="73284976-6eff-4a55-b925-9d82571c7f79" containerName="registry-server"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.354724 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.370348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"]
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434104 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434167 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.434217 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537789 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537878 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.537904 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.538643 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-catalog-content\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.539242 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84c1be7-4d75-42f5-a45d-cd83378aadca-utilities\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.586917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrxn\" (UniqueName: \"kubernetes.io/projected/d84c1be7-4d75-42f5-a45d-cd83378aadca-kube-api-access-lxrxn\") pod \"redhat-operators-hxmkb\" (UID: \"d84c1be7-4d75-42f5-a45d-cd83378aadca\") " pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:44 crc kubenswrapper[4766]: I0130 17:57:44.681361 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:57:45 crc kubenswrapper[4766]: I0130 17:57:45.162630 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"]
Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.064998 4766 generic.go:334] "Generic (PLEG): container finished" podID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerID="b7788b564bdadb9d8530785901307d94fc47f8758660789b46508bb69321c392" exitCode=0
Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.065066 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerDied","Data":"b7788b564bdadb9d8530785901307d94fc47f8758660789b46508bb69321c392"}
Jan 30 17:57:46 crc kubenswrapper[4766]: I0130 17:57:46.066580 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"75407a0832c673e238385866978541172268aca09f7f655c44988ed38c282199"}
Jan 30 17:57:51 crc kubenswrapper[4766]: I0130 17:57:51.038922 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:57:51 crc kubenswrapper[4766]: E0130 17:57:51.039941 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:57:59 crc kubenswrapper[4766]: I0130 17:57:59.191457 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06"}
Jan 30 17:58:00 crc kubenswrapper[4766]: I0130 17:58:00.203447 4766 generic.go:334] "Generic (PLEG): container finished" podID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerID="ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06" exitCode=0
Jan 30 17:58:00 crc kubenswrapper[4766]: I0130 17:58:00.203659 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerDied","Data":"ac508c4b87ba712911043d539a3b0e39f16d5bc1e6043c253c25bcd60f01ee06"}
Jan 30 17:58:02 crc kubenswrapper[4766]: I0130 17:58:02.225004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxmkb" event={"ID":"d84c1be7-4d75-42f5-a45d-cd83378aadca","Type":"ContainerStarted","Data":"6b946ffa1f4b52c19c636d3b367874088190e5fd884d68c8436310a53d49129f"}
Jan 30 17:58:02 crc kubenswrapper[4766]: I0130 17:58:02.261652 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxmkb" podStartSLOduration=3.641671693 podStartE2EDuration="18.26161944s" podCreationTimestamp="2026-01-30 17:57:44 +0000 UTC" firstStartedPulling="2026-01-30 17:57:46.067607793 +0000 UTC m=+5720.705565139" lastFinishedPulling="2026-01-30 17:58:00.68755554 +0000 UTC m=+5735.325512886" observedRunningTime="2026-01-30 17:58:02.253795247 +0000 UTC m=+5736.891752593" watchObservedRunningTime="2026-01-30 17:58:02.26161944 +0000 UTC m=+5736.899576786"
Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.039562 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:58:04 crc kubenswrapper[4766]: E0130 17:58:04.040154 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.683042 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:58:04 crc kubenswrapper[4766]: I0130 17:58:04.683362 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:58:05 crc kubenswrapper[4766]: I0130 17:58:05.739535 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:58:05 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:58:05 crc kubenswrapper[4766]: >
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.115574 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.118047 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.130105 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171079 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171158 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.171295 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.272798 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.272899 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273020 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273669 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.273920 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.295065 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"community-operators-jmmpk\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") " pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.438027 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:10 crc kubenswrapper[4766]: I0130 17:58:10.979052 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.295862 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24" exitCode=0
Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.295933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24"}
Jan 30 17:58:11 crc kubenswrapper[4766]: I0130 17:58:11.296163 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"5cf6928557b6939990dc1e11354457a1ee4fcb0ad54a84fa252e26d53511f230"}
Jan 30 17:58:12 crc kubenswrapper[4766]: I0130 17:58:12.313817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd"}
Jan 30 17:58:13 crc kubenswrapper[4766]: I0130 17:58:13.323067 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd" exitCode=0
Jan 30 17:58:13 crc kubenswrapper[4766]: I0130 17:58:13.323128 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd"}
Jan 30 17:58:14 crc kubenswrapper[4766]: I0130 17:58:14.334695 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerStarted","Data":"93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083"}
Jan 30 17:58:14 crc kubenswrapper[4766]: I0130 17:58:14.359613 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmmpk" podStartSLOduration=1.883891839 podStartE2EDuration="4.359595664s" podCreationTimestamp="2026-01-30 17:58:10 +0000 UTC" firstStartedPulling="2026-01-30 17:58:11.297418739 +0000 UTC m=+5745.935376085" lastFinishedPulling="2026-01-30 17:58:13.773122564 +0000 UTC m=+5748.411079910" observedRunningTime="2026-01-30 17:58:14.35505334 +0000 UTC m=+5748.993010686" watchObservedRunningTime="2026-01-30 17:58:14.359595664 +0000 UTC m=+5748.997553010"
Jan 30 17:58:15 crc kubenswrapper[4766]: I0130 17:58:15.728736 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:58:15 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:58:15 crc kubenswrapper[4766]: >
Jan 30 17:58:16 crc kubenswrapper[4766]: I0130 17:58:16.049445 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:58:16 crc kubenswrapper[4766]: E0130 17:58:16.049704 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.439382 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.440077 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:20 crc kubenswrapper[4766]: I0130 17:58:20.485247 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.356929 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9frg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.358568 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.361124 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-z9mbd"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.361340 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.374490 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385535 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385601 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385622 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385721 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385847 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.385992 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.388367 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.403721 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"]
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.473869 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488546 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488806 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488879 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrtl\" (UniqueName: \"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.488957 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489149 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489171 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489326 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489453 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489522 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa514cb2-1f05-42a6-a181-f4f62250bd7c-scripts\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489521 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-run-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.489545 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d8369af-eac5-4d31-b183-1a542da452c5-var-log-ovn\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.492214 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d8369af-eac5-4d31-b183-1a542da452c5-scripts\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.520732 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/8d8369af-eac5-4d31-b183-1a542da452c5-kube-api-access-xbqdc\") pod \"ovn-controller-k9frg\" (UID: \"8d8369af-eac5-4d31-b183-1a542da452c5\") " pod="openstack/ovn-controller-k9frg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.591248 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.591980 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa514cb2-1f05-42a6-a181-f4f62250bd7c-scripts\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") "
pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592103 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-run\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.591564 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-etc-ovs\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592346 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592612 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592738 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrtl\" (UniqueName: \"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592663 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-log\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.592517 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aa514cb2-1f05-42a6-a181-f4f62250bd7c-var-lib\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.594977 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa514cb2-1f05-42a6-a181-f4f62250bd7c-scripts\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.610999 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrtl\" (UniqueName: 
\"kubernetes.io/projected/aa514cb2-1f05-42a6-a181-f4f62250bd7c-kube-api-access-dfrtl\") pod \"ovn-controller-ovs-b4vlg\" (UID: \"aa514cb2-1f05-42a6-a181-f4f62250bd7c\") " pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.683496 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg" Jan 30 17:58:21 crc kubenswrapper[4766]: I0130 17:58:21.719002 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.166497 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.432009 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg" event={"ID":"8d8369af-eac5-4d31-b183-1a542da452c5","Type":"ContainerStarted","Data":"7e7e5a24829ec3b421b31fa6c1410ddd8fe104c7268e6838bb1161bbc508962b"} Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.610539 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4vlg"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.917913 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"] Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.919486 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.936542 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 17:58:22 crc kubenswrapper[4766]: I0130 17:58:22.966855 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"] Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.022922 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023018 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023100 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.023141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125493 
4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125555 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125816 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.125873 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.126629 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovs-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.126743 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-ovn-rundir\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.131589 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-config\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.168166 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l7rz\" (UniqueName: \"kubernetes.io/projected/1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5-kube-api-access-6l7rz\") pod \"ovn-controller-metrics-8hgh6\" (UID: \"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5\") " pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.242992 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8hgh6" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.447498 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445"} Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.447540 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"cf468ff065fb884ba4cf1173d1837a6b2420211ea82252bb4e199606c0139d64"} Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.451811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg" event={"ID":"8d8369af-eac5-4d31-b183-1a542da452c5","Type":"ContainerStarted","Data":"b986bc055709cbcc4703a88dca5184d3fc49b9385592ad3b2bfb2a90a8a769b4"} Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.452471 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-k9frg" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.499169 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-k9frg" podStartSLOduration=2.499149332 podStartE2EDuration="2.499149332s" podCreationTimestamp="2026-01-30 17:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:23.49757337 +0000 UTC m=+5758.135530726" watchObservedRunningTime="2026-01-30 17:58:23.499149332 +0000 UTC m=+5758.137106678" Jan 30 17:58:23 crc kubenswrapper[4766]: I0130 17:58:23.793896 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8hgh6"] Jan 30 17:58:23 crc kubenswrapper[4766]: W0130 17:58:23.798363 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a4fbcc6_ea61_45d4_b3c4_ecaf44f460c5.slice/crio-ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59 WatchSource:0}: Error finding container ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59: Status 404 returned error can't find the container with id ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59 Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.110793 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"] Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.111653 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmmpk" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server" containerID="cri-o://93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083" gracePeriod=2 Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.469394 4766 generic.go:334] "Generic (PLEG): container finished" podID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerID="93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083" exitCode=0 Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.469487 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083"} 
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.477116 4766 generic.go:334] "Generic (PLEG): container finished" podID="aa514cb2-1f05-42a6-a181-f4f62250bd7c" containerID="143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445" exitCode=0
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.477950 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerDied","Data":"143be8a0f811a00d67cd46ab09fd8b7f258bdc7b5d6dc1b23fe47b96043e7445"}
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.482605 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8hgh6" event={"ID":"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5","Type":"ContainerStarted","Data":"43435190680dd7c5bcead45a6b2a56d4fa91d33134a332f628615d3c5cc13704"}
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.483429 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8hgh6" event={"ID":"1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5","Type":"ContainerStarted","Data":"ecc1eb48bc9e4e9e7a961bda88f51c676a0719ec28e3491a039957c1ff928c59"}
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.550533 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8hgh6" podStartSLOduration=2.550508677 podStartE2EDuration="2.550508677s" podCreationTimestamp="2026-01-30 17:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:24.541151062 +0000 UTC m=+5759.179108418" watchObservedRunningTime="2026-01-30 17:58:24.550508677 +0000 UTC m=+5759.188466023"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.635691 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-d22q5"]
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.636944 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.639271 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.654621 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d22q5"]
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.685031 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") "
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.685856 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") "
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686002 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") pod \"3a0dc221-4e00-4488-b09c-31ce4c70b735\" (UID: \"3a0dc221-4e00-4488-b09c-31ce4c70b735\") "
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686759 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.686929 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.690772 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities" (OuterVolumeSpecName: "utilities") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.722129 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh" (OuterVolumeSpecName: "kube-api-access-gjgjh") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "kube-api-access-gjgjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.767624 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a0dc221-4e00-4488-b09c-31ce4c70b735" (UID: "3a0dc221-4e00-4488-b09c-31ce4c70b735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788197 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788640 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788700 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788712 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjgjh\" (UniqueName: \"kubernetes.io/projected/3a0dc221-4e00-4488-b09c-31ce4c70b735-kube-api-access-gjgjh\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.788721 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0dc221-4e00-4488-b09c-31ce4c70b735-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.789374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.807697 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"octavia-db-create-d22q5\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") " pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:24 crc kubenswrapper[4766]: I0130 17:58:24.968292 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.497800 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmmpk"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.497796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmmpk" event={"ID":"3a0dc221-4e00-4488-b09c-31ce4c70b735","Type":"ContainerDied","Data":"5cf6928557b6939990dc1e11354457a1ee4fcb0ad54a84fa252e26d53511f230"}
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.498398 4766 scope.go:117] "RemoveContainer" containerID="93f5d26fc1fca4c23cf2807e6521bc19bd8d2f655281164e00cfe3eb6836b083"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507253 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"bfd610bc75a97b55f0608c218cd67fffef18d73ffd384d902fcbc938a367bde2"}
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4vlg" event={"ID":"aa514cb2-1f05-42a6-a181-f4f62250bd7c","Type":"ContainerStarted","Data":"cd50fc48bc3b1dd5e97ea1f2783219fdb7826ffe66b540c7191f0eaec544b888"}
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507548 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.507926 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4vlg"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.527828 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-d22q5"]
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.537902 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b4vlg" podStartSLOduration=4.537881222 podStartE2EDuration="4.537881222s" podCreationTimestamp="2026-01-30 17:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:25.537401989 +0000 UTC m=+5760.175359345" watchObservedRunningTime="2026-01-30 17:58:25.537881222 +0000 UTC m=+5760.175838568"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.545324 4766 scope.go:117] "RemoveContainer" containerID="cb0dbe766675ea9006eae26acaa59b2b4c2ffb4eb8a5039fa387c58aebde62fd"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.564432 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.583252 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmmpk"]
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.636385 4766 scope.go:117] "RemoveContainer" containerID="71e8eb07bd8d0652afe6f78cfb4afc70c271503071bd4f84e51ac5f2dd19ad24"
Jan 30 17:58:25 crc kubenswrapper[4766]: I0130 17:58:25.755847 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxmkb" podUID="d84c1be7-4d75-42f5-a45d-cd83378aadca" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:58:25 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:58:25 crc kubenswrapper[4766]: >
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.064820 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" path="/var/lib/kubelet/pods/3a0dc221-4e00-4488-b09c-31ce4c70b735/volumes"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.077806 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"]
Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078424 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-utilities"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078448 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-utilities"
Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078493 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-content"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078518 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="extract-content"
Jan 30 17:58:26 crc kubenswrapper[4766]: E0130 17:58:26.078533 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.078809 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0dc221-4e00-4488-b09c-31ce4c70b735" containerName="registry-server"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.079664 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.089753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.114056 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"]
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.118798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.118882 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.220843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.220931 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.222089 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.241364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"octavia-c8b6-account-create-update-vqz78\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") " pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.415695 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.568649 4766 generic.go:334] "Generic (PLEG): container finished" podID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerID="82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26" exitCode=0
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.569282 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerDied","Data":"82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26"}
Jan 30 17:58:26 crc kubenswrapper[4766]: I0130 17:58:26.569323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerStarted","Data":"1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c"}
Jan 30 17:58:26 crc kubenswrapper[4766]: W0130 17:58:26.995564 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc91a16_cfbf_425d_bca1_f23f53f60beb.slice/crio-7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5 WatchSource:0}: Error finding container 7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5: Status 404 returned error can't find the container with id 7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.002348 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"]
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593485 4766 generic.go:334] "Generic (PLEG): container finished" podID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerID="c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5" exitCode=0
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593555 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerDied","Data":"c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5"}
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.593941 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerStarted","Data":"7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5"}
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.950889 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.975102 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") pod \"944d7612-c3af-4bbd-b193-a2769b8d362d\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") "
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.975231 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") pod \"944d7612-c3af-4bbd-b193-a2769b8d362d\" (UID: \"944d7612-c3af-4bbd-b193-a2769b8d362d\") "
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.976338 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "944d7612-c3af-4bbd-b193-a2769b8d362d" (UID: "944d7612-c3af-4bbd-b193-a2769b8d362d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:58:27 crc kubenswrapper[4766]: I0130 17:58:27.981067 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm" (OuterVolumeSpecName: "kube-api-access-k96sm") pod "944d7612-c3af-4bbd-b193-a2769b8d362d" (UID: "944d7612-c3af-4bbd-b193-a2769b8d362d"). InnerVolumeSpecName "kube-api-access-k96sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.053517 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"]
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.059663 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-q5td7"]
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.067671 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-q5td7"]
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.075170 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7780-account-create-update-96kcq"]
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.078987 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/944d7612-c3af-4bbd-b193-a2769b8d362d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.079017 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k96sm\" (UniqueName: \"kubernetes.io/projected/944d7612-c3af-4bbd-b193-a2769b8d362d-kube-api-access-k96sm\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.605384 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-d22q5"
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.606162 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-d22q5" event={"ID":"944d7612-c3af-4bbd-b193-a2769b8d362d","Type":"ContainerDied","Data":"1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c"}
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.606204 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9f9ee339317f32652ca14791ecd0b014e024af02d6661810ffe67d8333cb7c"
Jan 30 17:58:28 crc kubenswrapper[4766]: I0130 17:58:28.982156 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.102500 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") pod \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") "
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.102557 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") pod \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\" (UID: \"6fc91a16-cfbf-425d-bca1-f23f53f60beb\") "
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.103727 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fc91a16-cfbf-425d-bca1-f23f53f60beb" (UID: "6fc91a16-cfbf-425d-bca1-f23f53f60beb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.121981 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc" (OuterVolumeSpecName: "kube-api-access-cclnc") pod "6fc91a16-cfbf-425d-bca1-f23f53f60beb" (UID: "6fc91a16-cfbf-425d-bca1-f23f53f60beb"). InnerVolumeSpecName "kube-api-access-cclnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.205210 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cclnc\" (UniqueName: \"kubernetes.io/projected/6fc91a16-cfbf-425d-bca1-f23f53f60beb-kube-api-access-cclnc\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.205246 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc91a16-cfbf-425d-bca1-f23f53f60beb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620645 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-c8b6-account-create-update-vqz78" event={"ID":"6fc91a16-cfbf-425d-bca1-f23f53f60beb","Type":"ContainerDied","Data":"7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5"}
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620692 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7511c2f910e73880c57893eddebf7a72438e93478fbdcc69d3ccb57f2bd531e5"
Jan 30 17:58:29 crc kubenswrapper[4766]: I0130 17:58:29.620751 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-c8b6-account-create-update-vqz78"
Jan 30 17:58:30 crc kubenswrapper[4766]: I0130 17:58:30.051740 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0580a7-5f19-4aa4-893f-106812b15326" path="/var/lib/kubelet/pods/9d0580a7-5f19-4aa4-893f-106812b15326/volumes"
Jan 30 17:58:30 crc kubenswrapper[4766]: I0130 17:58:30.052383 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09e2e76-7c0b-4efa-b226-18df0a512567" path="/var/lib/kubelet/pods/e09e2e76-7c0b-4efa-b226-18df0a512567/volumes"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.039704 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.040450 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.297714 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"]
Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.298205 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298230 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create"
Jan 30 17:58:31 crc kubenswrapper[4766]: E0130 17:58:31.298261 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298269 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298490 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" containerName="mariadb-database-create"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.298530 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" containerName="mariadb-account-create-update"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.299211 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.308774 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"]
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.443864 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.443926 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.545835 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.545998 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.546645 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.564792 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"octavia-persistence-db-create-v77vj\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") " pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.625652 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.840404 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"]
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.842014 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.844514 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.851523 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"]
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.954016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:31 crc kubenswrapper[4766]: I0130 17:58:31.954142 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.056521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.056682 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.057652 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.076852 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"octavia-1019-account-create-update-skkw9\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") " pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.138638 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"]
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.168547 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.624566 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"]
Jan 30 17:58:32 crc kubenswrapper[4766]: W0130 17:58:32.627170 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c327fe8_260c_4117_b55e_3612be41da79.slice/crio-c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80 WatchSource:0}: Error finding container c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80: Status 404 returned error can't find the container with id c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.647724 4766 scope.go:117] "RemoveContainer" containerID="3c6e55bd0cf024ebee065ba107a5ecdfde761cb270a8d820adbc79b96576773c"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.691480 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerStarted","Data":"c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80"}
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.691558 4766 scope.go:117] "RemoveContainer" containerID="869db07172127624e0324810e45f248df650df66e4eafda3a0b74e7b81e90798"
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.694112 4766 generic.go:334] "Generic (PLEG): container finished" podID="0550f6c1-ed1f-405f-8420-507890f13d75" containerID="1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c" exitCode=0
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.694151 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerDied","Data":"1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c"}
Jan 30 17:58:32 crc kubenswrapper[4766]: I0130 17:58:32.698135 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerStarted","Data":"7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40"}
Jan 30 17:58:33 crc kubenswrapper[4766]: I0130 17:58:33.936576 4766 generic.go:334] "Generic (PLEG): container finished" podID="0c327fe8-260c-4117-b55e-3612be41da79" containerID="b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a" exitCode=0
Jan 30 17:58:33 crc kubenswrapper[4766]: I0130 17:58:33.936682 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerDied","Data":"b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a"}
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.284107 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432280 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") pod \"0550f6c1-ed1f-405f-8420-507890f13d75\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") "
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432910 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") pod \"0550f6c1-ed1f-405f-8420-507890f13d75\" (UID: \"0550f6c1-ed1f-405f-8420-507890f13d75\") "
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.432953 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0550f6c1-ed1f-405f-8420-507890f13d75" (UID: "0550f6c1-ed1f-405f-8420-507890f13d75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.433450 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550f6c1-ed1f-405f-8420-507890f13d75-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.438398 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7" (OuterVolumeSpecName: "kube-api-access-h4pk7") pod "0550f6c1-ed1f-405f-8420-507890f13d75" (UID: "0550f6c1-ed1f-405f-8420-507890f13d75"). InnerVolumeSpecName "kube-api-access-h4pk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.535472 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pk7\" (UniqueName: \"kubernetes.io/projected/0550f6c1-ed1f-405f-8420-507890f13d75-kube-api-access-h4pk7\") on node \"crc\" DevicePath \"\""
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.727694 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.773234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxmkb"
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.835116 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxmkb"]
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947439 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-v77vj" event={"ID":"0550f6c1-ed1f-405f-8420-507890f13d75","Type":"ContainerDied","Data":"7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40"}
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947480 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7292c9fa31e9bd7c25c242e8e419f4d28576cb67cf96f255b28e665c6f3dbc40"
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.947601 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-v77vj"
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.967808 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"]
Jan 30 17:58:34 crc kubenswrapper[4766]: I0130 17:58:34.968077 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ck55d" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" containerID="cri-o://0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" gracePeriod=2
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.066245 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6hlg5"]
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.083060 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6hlg5"]
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.392883 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9"
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.558810 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d"
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.560847 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") pod \"0c327fe8-260c-4117-b55e-3612be41da79\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") "
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.560975 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") pod \"0c327fe8-260c-4117-b55e-3612be41da79\" (UID: \"0c327fe8-260c-4117-b55e-3612be41da79\") "
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.561733 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c327fe8-260c-4117-b55e-3612be41da79" (UID: "0c327fe8-260c-4117-b55e-3612be41da79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.566519 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz" (OuterVolumeSpecName: "kube-api-access-t9xrz") pod "0c327fe8-260c-4117-b55e-3612be41da79" (UID: "0c327fe8-260c-4117-b55e-3612be41da79"). InnerVolumeSpecName "kube-api-access-t9xrz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662547 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662723 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.662786 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") pod \"e775d594-6680-4e4a-8b1f-01f3a0738015\" (UID: \"e775d594-6680-4e4a-8b1f-01f3a0738015\") " Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities" (OuterVolumeSpecName: "utilities") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663364 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9xrz\" (UniqueName: \"kubernetes.io/projected/0c327fe8-260c-4117-b55e-3612be41da79-kube-api-access-t9xrz\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.663516 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c327fe8-260c-4117-b55e-3612be41da79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.666564 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft" (OuterVolumeSpecName: "kube-api-access-5lvft") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "kube-api-access-5lvft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.766985 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.767278 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvft\" (UniqueName: \"kubernetes.io/projected/e775d594-6680-4e4a-8b1f-01f3a0738015-kube-api-access-5lvft\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.777812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e775d594-6680-4e4a-8b1f-01f3a0738015" (UID: "e775d594-6680-4e4a-8b1f-01f3a0738015"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.869557 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e775d594-6680-4e4a-8b1f-01f3a0738015-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.959911 4766 generic.go:334] "Generic (PLEG): container finished" podID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" exitCode=0 Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960064 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ck55d" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960918 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.960981 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ck55d" event={"ID":"e775d594-6680-4e4a-8b1f-01f3a0738015","Type":"ContainerDied","Data":"f894c54809796e9bc955e9c65573180850c5025aad67c7a860801cd7fd7de425"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.961002 4766 scope.go:117] "RemoveContainer" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.964979 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-1019-account-create-update-skkw9" event={"ID":"0c327fe8-260c-4117-b55e-3612be41da79","Type":"ContainerDied","Data":"c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80"} Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.965020 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f8933a9425a9aac79f59116905112ce4e3f31f532c7fdbadffacb63d566a80" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.965035 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-1019-account-create-update-skkw9" Jan 30 17:58:35 crc kubenswrapper[4766]: I0130 17:58:35.986040 4766 scope.go:117] "RemoveContainer" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.000846 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.029574 4766 scope.go:117] "RemoveContainer" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.038918 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ck55d"] Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.053891 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a04cef9-eaad-4fba-9aa9-0f15ed426885" path="/var/lib/kubelet/pods/7a04cef9-eaad-4fba-9aa9-0f15ed426885/volumes" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.054491 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" path="/var/lib/kubelet/pods/e775d594-6680-4e4a-8b1f-01f3a0738015/volumes" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.056223 4766 scope.go:117] "RemoveContainer" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057078 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": container with ID starting with 0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee not found: ID does not exist" containerID="0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057111 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee"} err="failed to get container status \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": rpc error: code = NotFound desc = could not find container \"0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee\": container with ID starting with 0aca5a8d90310b8c5dc83366f011e64837ec6305eaa5f37b461361c778c239ee not found: ID does not exist" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057130 4766 scope.go:117] "RemoveContainer" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057507 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": container with ID starting with f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd not found: ID does not exist" containerID="f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057548 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd"} err="failed to get container status \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": rpc error: code = NotFound desc = could not find 
container \"f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd\": container with ID starting with f0512c8bbc0a86dc1e032c89616f9648e9e30da9036a2694b12318eb2817a1dd not found: ID does not exist" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057575 4766 scope.go:117] "RemoveContainer" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: E0130 17:58:36.057948 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": container with ID starting with cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54 not found: ID does not exist" containerID="cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54" Jan 30 17:58:36 crc kubenswrapper[4766]: I0130 17:58:36.057976 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54"} err="failed to get container status \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": rpc error: code = NotFound desc = could not find container \"cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54\": container with ID starting with cda6ca3941388968472d6bc22ee2e166c80fab033e008666ff71f186be586a54 not found: ID does not exist" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176001 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176441 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176454 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176469 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-utilities" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176475 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-utilities" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176491 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176497 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176513 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176521 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: E0130 17:58:37.176542 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-content" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176551 4766 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="extract-content" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176729 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e775d594-6680-4e4a-8b1f-01f3a0738015" containerName="registry-server" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176737 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c327fe8-260c-4117-b55e-3612be41da79" containerName="mariadb-account-create-update" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.176747 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" containerName="mariadb-database-create" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.178033 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.180938 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.180985 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-2h9xz" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.181009 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.189556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195023 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195070 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195111 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195138 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.195159 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " 
pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.297564 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298000 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298240 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298269 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298306 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298827 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data-merged\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.298893 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/0eb984d4-df63-4a4e-b808-e30c97f6f606-octavia-run\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.302862 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-combined-ca-bundle\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.314770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-scripts\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.317058 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb984d4-df63-4a4e-b808-e30c97f6f606-config-data\") pod \"octavia-api-5c95b64c75-5mhgs\" (UID: \"0eb984d4-df63-4a4e-b808-e30c97f6f606\") " pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:37 crc kubenswrapper[4766]: I0130 17:58:37.498891 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:38 crc kubenswrapper[4766]: I0130 17:58:38.132216 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5c95b64c75-5mhgs"] Jan 30 17:58:38 crc kubenswrapper[4766]: W0130 17:58:38.143951 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb984d4_df63_4a4e_b808_e30c97f6f606.slice/crio-faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566 WatchSource:0}: Error finding container faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566: Status 404 returned error can't find the container with id faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566 Jan 30 17:58:38 crc kubenswrapper[4766]: I0130 17:58:38.991817 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"faf04933e5e63a749aefb17151f84b059d72581faa2000348392bfd5a90b0566"} Jan 30 17:58:45 crc kubenswrapper[4766]: I0130 17:58:45.040546 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:45 crc kubenswrapper[4766]: E0130 17:58:45.042520 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:48 crc kubenswrapper[4766]: I0130 17:58:48.035823 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:58:48 crc kubenswrapper[4766]: I0130 17:58:48.051205 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zr744"] Jan 30 17:58:50 crc kubenswrapper[4766]: I0130 17:58:50.051199 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c267d58-0d99-463b-9011-34118e7f961a" path="/var/lib/kubelet/pods/9c267d58-0d99-463b-9011-34118e7f961a/volumes" Jan 30 17:58:53 crc kubenswrapper[4766]: I0130 17:58:53.147343 4766 generic.go:334] "Generic (PLEG): container finished" podID="0eb984d4-df63-4a4e-b808-e30c97f6f606" containerID="03043db2deda6cf603d122dd759c870251f616eb6f723b24bbdfb636cc6e75be" exitCode=0 Jan 30 17:58:53 crc kubenswrapper[4766]: I0130 17:58:53.147408 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerDied","Data":"03043db2deda6cf603d122dd759c870251f616eb6f723b24bbdfb636cc6e75be"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.159508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" 
event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"efe7b09ed864756182729316e61ca03a5eb0cbef21aee43310bada2149c9ffb3"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.160209 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5c95b64c75-5mhgs" event={"ID":"0eb984d4-df63-4a4e-b808-e30c97f6f606","Type":"ContainerStarted","Data":"beee8500ff584654976b6f044659339aa9095538b851e561ae295d5fdc9064a4"} Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.160233 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.764167 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-5c95b64c75-5mhgs" podStartSLOduration=3.5937698080000002 podStartE2EDuration="17.764139865s" podCreationTimestamp="2026-01-30 17:58:37 +0000 UTC" firstStartedPulling="2026-01-30 17:58:38.146868084 +0000 UTC m=+5772.784825430" lastFinishedPulling="2026-01-30 17:58:52.317238141 +0000 UTC m=+5786.955195487" observedRunningTime="2026-01-30 17:58:54.182446335 +0000 UTC m=+5788.820403681" watchObservedRunningTime="2026-01-30 17:58:54.764139865 +0000 UTC m=+5789.402097221" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.767731 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.770500 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774523 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774557 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.774619 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.779843 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.863998 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864408 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864614 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.864829 4766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966862 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.966942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.967990 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-hm-ports\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.968325 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data-merged\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.985901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-config-data\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:54 crc kubenswrapper[4766]: I0130 17:58:54.986787 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37d87bf7-0bd7-4201-b0e3-0d1b8062c930-scripts\") pod \"octavia-rsyslog-l7mdv\" (UID: \"37d87bf7-0bd7-4201-b0e3-0d1b8062c930\") " pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.104488 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.191355 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.379642 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.384648 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.388201 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.393570 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.484212 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.484283 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.586679 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.587128 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.587678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.595053 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"octavia-image-upload-59f8cff499-kprnv\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.712556 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:55 crc 
kubenswrapper[4766]: I0130 17:58:55.727752 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:58:55 crc kubenswrapper[4766]: I0130 17:58:55.849467 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-l7mdv"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.111421 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.114605 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.117280 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.128129 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.206979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.207135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.207310 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.211203 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.218472 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.224170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"626b26395f7ce3aae2dc570650fe62987d13d3b3d64bd55bae6643135934c3bb"} Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.312898 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.313901 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.313956 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.314054 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.316602 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.324326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.326828 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.328064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"octavia-db-sync-8nm42\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.439891 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.763676 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k9frg" podUID="8d8369af-eac5-4d31-b183-1a542da452c5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 17:58:56 crc kubenswrapper[4766]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 17:58:56 crc kubenswrapper[4766]: > Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.786735 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.788456 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4vlg" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.913426 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.914641 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.917815 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.923212 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-8nm42"] Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.935724 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936102 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936163 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936292 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: 
\"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.936319 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:56 crc kubenswrapper[4766]: I0130 17:58:56.963149 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.037726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038063 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038130 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038153 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038203 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.038501 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039099 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039631 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.039766 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.040611 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:58:57 crc kubenswrapper[4766]: E0130 17:58:57.041071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.058340 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"ovn-controller-k9frg-config-8htbk\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.237904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"3dbe81e46eed52883df8dfc889eb0ab8c07352aa770fac3bae2b8943846bbc9f"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.240170 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.240219 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d"} Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.247922 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:58:57 crc kubenswrapper[4766]: I0130 17:58:57.788409 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:58:58 crc kubenswrapper[4766]: I0130 17:58:58.254199 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerID="31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303" exitCode=0 Jan 30 17:58:58 crc kubenswrapper[4766]: I0130 17:58:58.254262 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.263610 4766 generic.go:334] "Generic (PLEG): container finished" podID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerID="622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3" exitCode=0 Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.263667 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerDied","Data":"622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.264326 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerStarted","Data":"f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.268470 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerStarted","Data":"ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a"} Jan 30 17:58:59 crc kubenswrapper[4766]: I0130 17:58:59.326119 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-8nm42" podStartSLOduration=3.326094811 podStartE2EDuration="3.326094811s" podCreationTimestamp="2026-01-30 17:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:58:59.305046729 +0000 UTC m=+5793.943004075" watchObservedRunningTime="2026-01-30 17:58:59.326094811 +0000 UTC m=+5793.964052147" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.287614 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b"} Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.794116 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920596 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920824 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.920962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921032 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921254 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.921311 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") pod \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\" (UID: \"f4f4b9f0-b0d7-490c-984f-b50a40b2b723\") " Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925817 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925942 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run" (OuterVolumeSpecName: "var-run") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.925950 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.926562 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts" (OuterVolumeSpecName: "scripts") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.926760 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:59:00 crc kubenswrapper[4766]: I0130 17:59:00.950259 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4" (OuterVolumeSpecName: "kube-api-access-zkmp4") pod "f4f4b9f0-b0d7-490c-984f-b50a40b2b723" (UID: "f4f4b9f0-b0d7-490c-984f-b50a40b2b723"). InnerVolumeSpecName "kube-api-access-zkmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025298 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkmp4\" (UniqueName: \"kubernetes.io/projected/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-kube-api-access-zkmp4\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025348 4766 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025363 4766 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025375 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025389 4766 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.025401 4766 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4f4b9f0-b0d7-490c-984f-b50a40b2b723-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299815 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9frg-config-8htbk" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9frg-config-8htbk" event={"ID":"f4f4b9f0-b0d7-490c-984f-b50a40b2b723","Type":"ContainerDied","Data":"f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446"} Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.299889 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48a768c35bd6923498f26636554b3d5843d8e3e24e9068eee17555bd7ab0446" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.725485 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-k9frg" Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.881740 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:59:01 crc kubenswrapper[4766]: I0130 17:59:01.893607 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-k9frg-config-8htbk"] Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.050651 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" path="/var/lib/kubelet/pods/f4f4b9f0-b0d7-490c-984f-b50a40b2b723/volumes" Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.309578 4766 generic.go:334] "Generic (PLEG): container finished" podID="37d87bf7-0bd7-4201-b0e3-0d1b8062c930" containerID="c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b" exitCode=0 Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.309634 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerDied","Data":"c378fedde629874c7b167ebd5f1cc93d0ed0243ac98f90ed4616430a0502cf1b"} Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.313835 4766 generic.go:334] "Generic (PLEG): container finished" podID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerID="ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a" exitCode=0 Jan 30 17:59:02 crc kubenswrapper[4766]: I0130 17:59:02.313877 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a"} Jan 30 17:59:10 crc kubenswrapper[4766]: I0130 17:59:10.040170 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:10 crc kubenswrapper[4766]: E0130 17:59:10.041135 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.658627 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.838865 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839424 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839485 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.839519 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") pod \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\" (UID: \"fd5031f6-51af-4f63-8bc4-4a518f58ddd4\") " Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.845211 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts" (OuterVolumeSpecName: "scripts") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.846334 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data" (OuterVolumeSpecName: "config-data") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.862735 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.866453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5031f6-51af-4f63-8bc4-4a518f58ddd4" (UID: "fd5031f6-51af-4f63-8bc4-4a518f58ddd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942397 4766 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942444 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942457 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:11 crc kubenswrapper[4766]: I0130 17:59:11.942468 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5031f6-51af-4f63-8bc4-4a518f58ddd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.351318 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.356636 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5c95b64c75-5mhgs" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441323 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-8nm42" event={"ID":"fd5031f6-51af-4f63-8bc4-4a518f58ddd4","Type":"ContainerDied","Data":"2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d"} Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441357 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-8nm42" Jan 30 17:59:12 crc kubenswrapper[4766]: I0130 17:59:12.441374 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c806951b029baf7fccea5984672ed2b0aa381a3215c57028fdc906444227a0d" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.585806 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.587925 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-kprnv_openstack(20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:59:12 crc kubenswrapper[4766]: E0130 17:59:12.589246 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.451699 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-l7mdv" event={"ID":"37d87bf7-0bd7-4201-b0e3-0d1b8062c930","Type":"ContainerStarted","Data":"4f36cf88151b4841c0255a4ec15b29656e21be70b64fa7810306e1a52ce7136a"} Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.452809 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:59:13 crc kubenswrapper[4766]: E0130 17:59:13.454527 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" Jan 30 17:59:13 crc kubenswrapper[4766]: I0130 17:59:13.495990 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-l7mdv" podStartSLOduration=2.490489973 podStartE2EDuration="19.495971295s" podCreationTimestamp="2026-01-30 17:58:54 +0000 UTC" firstStartedPulling="2026-01-30 17:58:55.72099005 +0000 UTC m=+5790.358947396" 
lastFinishedPulling="2026-01-30 17:59:12.726471372 +0000 UTC m=+5807.364428718" observedRunningTime="2026-01-30 17:59:13.492608484 +0000 UTC m=+5808.130565830" watchObservedRunningTime="2026-01-30 17:59:13.495971295 +0000 UTC m=+5808.133928641" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.674893 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677030 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="init" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677300 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="init" Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677430 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677507 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: E0130 17:59:14.677605 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.677686 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.678038 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f4b9f0-b0d7-490c-984f-b50a40b2b723" containerName="ovn-config" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.683111 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" containerName="octavia-db-sync" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.688789 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.691762 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.712663 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.712933 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.713134 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815039 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815141 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815212 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.815908 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.816206 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:14 crc kubenswrapper[4766]: I0130 17:59:14.836356 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"certified-operators-xcxq8\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:15 crc kubenswrapper[4766]: I0130 17:59:15.055051 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:15 crc kubenswrapper[4766]: I0130 17:59:15.621911 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:15 crc kubenswrapper[4766]: W0130 17:59:15.624728 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4834c01_4dd4_4f39_aa18_6abc2d33686c.slice/crio-1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d WatchSource:0}: Error finding container 1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d: Status 404 returned error can't find the container with id 1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486148 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e" exitCode=0 Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486254 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e"} Jan 30 17:59:16 crc kubenswrapper[4766]: I0130 17:59:16.486552 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d"} Jan 30 17:59:17 crc kubenswrapper[4766]: I0130 17:59:17.498047 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c"} Jan 30 17:59:18 crc kubenswrapper[4766]: I0130 17:59:18.510703 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c" exitCode=0 Jan 30 17:59:18 crc kubenswrapper[4766]: I0130 17:59:18.510766 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c"} Jan 30 17:59:19 crc kubenswrapper[4766]: I0130 17:59:19.521418 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerStarted","Data":"6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678"} Jan 30 17:59:19 crc kubenswrapper[4766]: I0130 17:59:19.542952 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcxq8" 
podStartSLOduration=3.116917657 podStartE2EDuration="5.542934052s" podCreationTimestamp="2026-01-30 17:59:14 +0000 UTC" firstStartedPulling="2026-01-30 17:59:16.48786137 +0000 UTC m=+5811.125818716" lastFinishedPulling="2026-01-30 17:59:18.913877765 +0000 UTC m=+5813.551835111" observedRunningTime="2026-01-30 17:59:19.537285049 +0000 UTC m=+5814.175242395" watchObservedRunningTime="2026-01-30 17:59:19.542934052 +0000 UTC m=+5814.180891398" Jan 30 17:59:22 crc kubenswrapper[4766]: I0130 17:59:22.040129 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:22 crc kubenswrapper[4766]: E0130 17:59:22.040940 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.056003 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.056459 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.102199 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.140646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-l7mdv" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.650535 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:25 crc kubenswrapper[4766]: I0130 17:59:25.705782 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:26 crc kubenswrapper[4766]: I0130 17:59:26.598069 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7"} Jan 30 17:59:27 crc kubenswrapper[4766]: I0130 17:59:27.613791 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcxq8" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server" containerID="cri-o://6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678" gracePeriod=2 Jan 30 17:59:28 crc kubenswrapper[4766]: I0130 17:59:28.629346 4766 generic.go:334] "Generic (PLEG): container finished" podID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerID="6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678" exitCode=0 Jan 30 17:59:28 crc kubenswrapper[4766]: I0130 17:59:28.629481 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678"} Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 
17:59:29.357562 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435514 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435596 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.435648 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") pod \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\" (UID: \"e4834c01-4dd4-4f39-aa18-6abc2d33686c\") " Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.436793 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities" (OuterVolumeSpecName: "utilities") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.443912 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6" (OuterVolumeSpecName: "kube-api-access-bs6b6") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "kube-api-access-bs6b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.507653 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4834c01-4dd4-4f39-aa18-6abc2d33686c" (UID: "e4834c01-4dd4-4f39-aa18-6abc2d33686c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537582 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537623 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4834c01-4dd4-4f39-aa18-6abc2d33686c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.537642 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs6b6\" (UniqueName: \"kubernetes.io/projected/e4834c01-4dd4-4f39-aa18-6abc2d33686c-kube-api-access-bs6b6\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655508 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcxq8" event={"ID":"e4834c01-4dd4-4f39-aa18-6abc2d33686c","Type":"ContainerDied","Data":"1291f9f1e44e0679180ad2235f81ea96c5da16673a5005302ee7baa4eb70f06d"} Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655555 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcxq8" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.655579 4766 scope.go:117] "RemoveContainer" containerID="6dad50dbe6259fb1422608f3a8180cd5de4d5f6edc03a2ba8666d0ebac69d678" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.663731 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerID="9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7" exitCode=0 Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.663811 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7"} Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.698145 4766 scope.go:117] "RemoveContainer" containerID="380006adfe28b44508cc25aa4e9baefd90b8e39115de3b2518d212a23a88586c" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.721719 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.732384 4766 scope.go:117] "RemoveContainer" containerID="dd126a225572a0e6279e5b77c45922224f194737968711dc6b9a6dd0e122c09e" Jan 30 17:59:29 crc kubenswrapper[4766]: I0130 17:59:29.732619 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcxq8"] Jan 30 17:59:30 crc kubenswrapper[4766]: I0130 17:59:30.052385 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" path="/var/lib/kubelet/pods/e4834c01-4dd4-4f39-aa18-6abc2d33686c/volumes" Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.691327 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerStarted","Data":"e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39"} Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.846362 4766 scope.go:117] "RemoveContainer" 
containerID="bc8079f8c0ccd370bc3a3a51529041c82b6352c79d4171184261059c45df6bfa" Jan 30 17:59:32 crc kubenswrapper[4766]: I0130 17:59:32.886997 4766 scope.go:117] "RemoveContainer" containerID="a65fe77666bd1dd89a9c3e39317ec3bd94cd2f336d1abf824947e6dcb6ba640a" Jan 30 17:59:35 crc kubenswrapper[4766]: I0130 17:59:35.039637 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:35 crc kubenswrapper[4766]: E0130 17:59:35.040209 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:47 crc kubenswrapper[4766]: I0130 17:59:47.040078 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:47 crc kubenswrapper[4766]: E0130 17:59:47.041031 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.633913 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podStartSLOduration=21.949871511 podStartE2EDuration="57.633892876s" podCreationTimestamp="2026-01-30 17:58:55 +0000 UTC" firstStartedPulling="2026-01-30 17:58:56.229787128 +0000 UTC m=+5790.867744474" lastFinishedPulling="2026-01-30 17:59:31.913808493 +0000 UTC m=+5826.551765839" observedRunningTime="2026-01-30 17:59:32.715618122 +0000 UTC m=+5827.353575468" watchObservedRunningTime="2026-01-30 17:59:52.633892876 +0000 UTC m=+5847.271850222" Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.645199 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.645453 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-kprnv" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd" containerID="cri-o://e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39" gracePeriod=30 Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.949998 4766 generic.go:334] "Generic (PLEG): container finished" podID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerID="e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39" exitCode=0 Jan 30 17:59:52 crc kubenswrapper[4766]: I0130 17:59:52.950435 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39"} Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.204285 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.241415 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") pod \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.241520 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") pod \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\" (UID: \"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09\") " Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.286221 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" (UID: "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.326812 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" (UID: "20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.344548 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.344603 4766 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961467 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-kprnv" event={"ID":"20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09","Type":"ContainerDied","Data":"3dbe81e46eed52883df8dfc889eb0ab8c07352aa770fac3bae2b8943846bbc9f"} Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961524 4766 scope.go:117] "RemoveContainer" containerID="e2f5208b28788e01c9bd188a40ecfb44c883ee355f35a749ceb30f8df75e9e39" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.961521 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-kprnv" Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.997042 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:59:53 crc kubenswrapper[4766]: I0130 17:59:53.998254 4766 scope.go:117] "RemoveContainer" containerID="9288a1db61282484e649eb58946ed94646c5ab6baa0a6167232dee58508adef7" Jan 30 17:59:54 crc kubenswrapper[4766]: I0130 17:59:54.006331 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-kprnv"] Jan 30 17:59:54 crc kubenswrapper[4766]: I0130 17:59:54.056156 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" path="/var/lib/kubelet/pods/20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09/volumes" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.221974 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"] Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223068 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-content" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223081 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-content" Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223102 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223108 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd" Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223132 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223140 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server" Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223149 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="init" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223156 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="init" Jan 30 17:59:57 crc kubenswrapper[4766]: E0130 17:59:57.223170 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-utilities" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223190 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="extract-utilities" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223375 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c8eeb5-37d8-4bf9-8dcd-e6a7f2d9ac09" containerName="octavia-amphora-httpd" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.223389 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4834c01-4dd4-4f39-aa18-6abc2d33686c" containerName="registry-server" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.224388 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.232722 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"] Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.232878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.320541 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.320657 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.421655 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.421791 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.422336 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/a2dd03c7-c095-4563-9107-802624d1e4f5-amphora-image\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.428984 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a2dd03c7-c095-4563-9107-802624d1e4f5-httpd-config\") pod \"octavia-image-upload-59f8cff499-b9qv6\" (UID: \"a2dd03c7-c095-4563-9107-802624d1e4f5\") " pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:57 crc kubenswrapper[4766]: I0130 17:59:57.589963 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" Jan 30 17:59:58 crc kubenswrapper[4766]: I0130 17:59:58.066196 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-b9qv6"] Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.008150 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee"} Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.008563 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"7bfc45edc16a9568b1b93dc25fc7b80b90ce50c9462614d0b5ef7a5a2181ea6a"} Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.039074 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 17:59:59 crc kubenswrapper[4766]: E0130 17:59:59.039349 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.419226 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-422fs"] Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.420891 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423044 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423349 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.423593 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.429339 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472684 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472716 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472775 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472826 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.472853 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574500 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574528 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574695 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.574719 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.576303 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1c79d934-7880-4883-bee6-c60ea7745616-config-data-merged\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.577584 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1c79d934-7880-4883-bee6-c60ea7745616-hm-ports\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.587358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-amphora-certs\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.587516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-scripts\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.588148 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-combined-ca-bundle\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.588351 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79d934-7880-4883-bee6-c60ea7745616-config-data\") pod \"octavia-healthmanager-422fs\" (UID: \"1c79d934-7880-4883-bee6-c60ea7745616\") " pod="openstack/octavia-healthmanager-422fs"
Jan 30 17:59:59 crc kubenswrapper[4766]: I0130 17:59:59.749885 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-422fs"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.147505 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"]
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.149253 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.152527 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.152542 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.165074 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"]
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191066 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.191362 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.292847 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.292935 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.292962 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.295974 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.300995 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.311770 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.313258 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"collect-profiles-29496600-spdnq\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.482849 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:00 crc kubenswrapper[4766]: I0130 18:00:00.986703 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"]
Jan 30 18:00:00 crc kubenswrapper[4766]: W0130 18:00:00.989635 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaef52df_0dea_425d_ac97_09334d4d44bf.slice/crio-1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf WatchSource:0}: Error finding container 1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf: Status 404 returned error can't find the container with id 1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.025796 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c"}
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.025846 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"0c9862552caf41cbd5f638444ecf4b54dfc7a0268d5453a489b3d0fa94a6938c"}
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.031114 4766 generic.go:334] "Generic (PLEG): container finished" podID="a2dd03c7-c095-4563-9107-802624d1e4f5" containerID="e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee" exitCode=0
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.031334 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerDied","Data":"e5af812f61446effd4db02a5680ea069f94a7df7166099b6970c932818b1caee"}
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.033815 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerStarted","Data":"1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"}
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.501878 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-f25c5"]
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.503708 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.505557 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.512484 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.522782 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f25c5"]
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.618951 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.619654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.619754 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620041 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620135 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.620237 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721601 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721669 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721737 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.721858 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.722511 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data-merged\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.722839 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-hm-ports\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.730678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-scripts\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.734550 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-amphora-certs\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.735038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-config-data\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.745447 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a-combined-ca-bundle\") pod \"octavia-housekeeping-f25c5\" (UID: \"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a\") " pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:01 crc kubenswrapper[4766]: I0130 18:00:01.859531 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.044756 4766 generic.go:334] "Generic (PLEG): container finished" podID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerID="630dd27806d1cf4d4d5c6404849501d2feb6f83451792694c5d5c0e9409fa40e" exitCode=0
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.053045 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerDied","Data":"630dd27806d1cf4d4d5c6404849501d2feb6f83451792694c5d5c0e9409fa40e"}
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.365799 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f25c5"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.596781 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.599397 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.603222 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.604329 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.606364 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638613 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638728 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.638856 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.639308 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741520 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741641 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741707 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.742381 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5aade569-1bea-4133-8ea3-51cea870143d-config-data-merged\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.741809 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.742723 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.743487 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5aade569-1bea-4133-8ea3-51cea870143d-hm-ports\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749052 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-combined-ca-bundle\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749058 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-config-data\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749241 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-amphora-certs\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.749268 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aade569-1bea-4133-8ea3-51cea870143d-scripts\") pod \"octavia-worker-qrfbg\" (UID: \"5aade569-1bea-4133-8ea3-51cea870143d\") " pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:02 crc kubenswrapper[4766]: I0130 18:00:02.917412 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.057736 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"fac55d0c967812e98dc725e19eb5f5fbc02f58bbe462c23ee392098dbdd974c2"}
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.510245 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-qrfbg"]
Jan 30 18:00:03 crc kubenswrapper[4766]: W0130 18:00:03.535475 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aade569_1bea_4133_8ea3_51cea870143d.slice/crio-4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8 WatchSource:0}: Error finding container 4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8: Status 404 returned error can't find the container with id 4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.547801 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571273 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571434 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.571526 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") pod \"eaef52df-0dea-425d-ac97-09334d4d44bf\" (UID: \"eaef52df-0dea-425d-ac97-09334d4d44bf\") "
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.573369 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.580100 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.580638 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl" (OuterVolumeSpecName: "kube-api-access-ftltl") pod "eaef52df-0dea-425d-ac97-09334d4d44bf" (UID: "eaef52df-0dea-425d-ac97-09334d4d44bf"). InnerVolumeSpecName "kube-api-access-ftltl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.673796 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftltl\" (UniqueName: \"kubernetes.io/projected/eaef52df-0dea-425d-ac97-09334d4d44bf-kube-api-access-ftltl\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.675000 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaef52df-0dea-425d-ac97-09334d4d44bf-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:03 crc kubenswrapper[4766]: I0130 18:00:03.675066 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaef52df-0dea-425d-ac97-09334d4d44bf-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.069731 4766 generic.go:334] "Generic (PLEG): container finished" podID="1c79d934-7880-4883-bee6-c60ea7745616" containerID="3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c" exitCode=0
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.070373 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerDied","Data":"3725f34e1ccb8b31dc9940969a4050287625592f0d18c088cda3834f58c9655c"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.077387 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq"
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.078143 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-spdnq" event={"ID":"eaef52df-0dea-425d-ac97-09334d4d44bf","Type":"ContainerDied","Data":"1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.078207 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1952b855f783d1c096a055762c6f9dd3a2a7e77b1bc9815fc98a26b7bf9fedcf"
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.079132 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"4d468e515aae54e10fc3cde8d96560ca07aaf017f9f7e81455e5dd64aea90ab8"}
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.625560 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"]
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.640810 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-l7zjm"]
Jan 30 18:00:04 crc kubenswrapper[4766]: I0130 18:00:04.654604 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-422fs"]
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.122974 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-422fs" event={"ID":"1c79d934-7880-4883-bee6-c60ea7745616","Type":"ContainerStarted","Data":"aa75a431b6569eb0fd2b042b2f24d3adf9d3e399b96bbd129c4518bba6afa585"}
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.123610 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-422fs"
Jan 30 18:00:05 crc kubenswrapper[4766]: I0130 18:00:05.161451 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-422fs" podStartSLOduration=6.161414184 podStartE2EDuration="6.161414184s" podCreationTimestamp="2026-01-30 17:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:00:05.151573887 +0000 UTC m=+5859.789531223" watchObservedRunningTime="2026-01-30 18:00:05.161414184 +0000 UTC m=+5859.799371540"
Jan 30 18:00:06 crc kubenswrapper[4766]: I0130 18:00:06.058954 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c37317-bc31-4749-bf2a-000f3786ebdb" path="/var/lib/kubelet/pods/20c37317-bc31-4749-bf2a-000f3786ebdb/volumes"
Jan 30 18:00:06 crc kubenswrapper[4766]: I0130 18:00:06.140461 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" event={"ID":"a2dd03c7-c095-4563-9107-802624d1e4f5","Type":"ContainerStarted","Data":"28540abc823fad4e2aecdd708f90c28850196ec3b4fb5e1876b10103520bcc9f"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.149489 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.151433 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a"}
Jan 30 18:00:07 crc kubenswrapper[4766]: I0130 18:00:07.173465 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-b9qv6" podStartSLOduration=3.487921605 podStartE2EDuration="10.173443743s" podCreationTimestamp="2026-01-30 17:59:57 +0000 UTC" firstStartedPulling="2026-01-30 17:59:58.068884815 +0000 UTC m=+5852.706842161" lastFinishedPulling="2026-01-30 18:00:04.754406953 +0000 UTC m=+5859.392364299" observedRunningTime="2026-01-30 18:00:06.157419447 +0000 UTC m=+5860.795376793" watchObservedRunningTime="2026-01-30 18:00:07.173443743 +0000 UTC m=+5861.811401089"
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.174832 4766 generic.go:334] "Generic (PLEG): container finished" podID="5aade569-1bea-4133-8ea3-51cea870143d" containerID="5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb" exitCode=0
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.174925 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerDied","Data":"5527f6ae343fe79d80e6d78898deafe47bede7b9d73a5f66c1bad456982201bb"}
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.177132 4766 generic.go:334] "Generic (PLEG): container finished" podID="7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a" containerID="e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a" exitCode=0
Jan 30 18:00:10 crc kubenswrapper[4766]: I0130 18:00:10.177199 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerDied","Data":"e62c53658244c6993e0692dd43c1db69147e0ab8e7a7528ae8479fffc0ba174a"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.189096 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f25c5" event={"ID":"7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a","Type":"ContainerStarted","Data":"c8abed3e0876d78fcfbca64cfcab0d25b7920f91fec1ac552487164b0fb4d18b"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.189637 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.191655 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-qrfbg" event={"ID":"5aade569-1bea-4133-8ea3-51cea870143d","Type":"ContainerStarted","Data":"1ed70f33dc959997aef972353763f2d4044b2f90403962aaac7d9747f5c05eac"}
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.191892 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.212050 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-f25c5" podStartSLOduration=6.526098213 podStartE2EDuration="10.212031489s" podCreationTimestamp="2026-01-30 18:00:01 +0000 UTC" firstStartedPulling="2026-01-30 18:00:02.37716615 +0000 UTC m=+5857.015123496" lastFinishedPulling="2026-01-30 18:00:06.063099426 +0000 UTC m=+5860.701056772" observedRunningTime="2026-01-30 18:00:11.206419436 +0000 UTC m=+5865.844376802" watchObservedRunningTime="2026-01-30 18:00:11.212031489 +0000 UTC m=+5865.849988835"
Jan 30 18:00:11 crc kubenswrapper[4766]: I0130 18:00:11.233433 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-qrfbg" podStartSLOduration=6.667833168 podStartE2EDuration="9.233409952s" podCreationTimestamp="2026-01-30 18:00:02 +0000 UTC" firstStartedPulling="2026-01-30 18:00:03.543428113 +0000 UTC m=+5858.181385459" lastFinishedPulling="2026-01-30 18:00:06.109004897 +0000 UTC m=+5860.746962243" observedRunningTime="2026-01-30 18:00:11.229930166 +0000 UTC m=+5865.867887522" watchObservedRunningTime="2026-01-30 18:00:11.233409952 +0000 UTC m=+5865.871367298"
Jan 30 18:00:14 crc kubenswrapper[4766]: I0130 18:00:14.041428 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:14 crc kubenswrapper[4766]: E0130 18:00:14.042332 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:14 crc kubenswrapper[4766]: I0130 18:00:14.807559 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-422fs"
Jan 30 18:00:16 crc kubenswrapper[4766]: I0130 18:00:16.891534 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-f25c5"
Jan 30 18:00:17 crc kubenswrapper[4766]: I0130 18:00:17.949612 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-qrfbg"
Jan 30 18:00:29 crc kubenswrapper[4766]: I0130 18:00:29.040369 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:29 crc kubenswrapper[4766]: E0130 18:00:29.041679 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:33 crc kubenswrapper[4766]: I0130 18:00:33.007415 4766 scope.go:117] "RemoveContainer" containerID="e7a7edb57ac3d27e7b4d4cf72feb542694a5d4be05f6296f5473eacbc813a28b"
Jan 30 18:00:44 crc kubenswrapper[4766]: I0130 18:00:44.039721 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:44 crc kubenswrapper[4766]: E0130 18:00:44.040684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:00:55 crc kubenswrapper[4766]: I0130 18:00:55.039664 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354"
Jan 30 18:00:55 crc kubenswrapper[4766]: E0130 18:00:55.040616 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.151787 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:00 crc kubenswrapper[4766]: E0130 18:01:00.152978 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.152999 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.153297 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaef52df-0dea-425d-ac97-09334d4d44bf" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.154194 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.166779 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.256497 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.256921 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.257025 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.257141 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359263 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359316 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359359 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.359402 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.366970 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.367218 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.369132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.375146 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"keystone-cron-29496601-pl6qc\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.529816 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.852995 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.855260 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.858752 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-frkxg"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.858924 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.859041 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.859224 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.883820 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921540 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921816 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" containerID="cri-o://cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.921964 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" containerID="cri-o://ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976397 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976461 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976512 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976538 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.976582 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.985754 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.985998 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" containerID="cri-o://155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" gracePeriod=30
Jan 30 18:01:00 crc kubenswrapper[4766]: I0130 18:01:00.986464 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" containerID="cri-o://ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" gracePeriod=30
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.044600 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-pl6qc"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.060150 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.062205 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.069591 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"]
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089056 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089224 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089365 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089398 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.089499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.090143 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.090808 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.095812 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.096081 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.113472 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"horizon-5b8665dc85-mqdzq\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.180578 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.191543 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.191894 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192057 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192382 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.192562 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295587 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295670 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295753 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7"
Jan 30
18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.295972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.296302 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.298287 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.299326 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.305719 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.321891 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"horizon-646c4b5b47-xr8w7\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.412820 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.463842 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.512833 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.514601 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.535097 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.602997 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603128 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603193 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603247 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.603301 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.692092 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704499 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704607 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704643 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 
18:01:01.704686 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.704735 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.705440 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.705585 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.706102 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.710562 4766 generic.go:334] "Generic (PLEG): container finished" podID="40e23b5f-28fc-4354-94de-90d54908e61b" containerID="155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" exitCode=143 Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.710648 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8"} Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.738280 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.762434 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"horizon-7c4d556457-cgwh5\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.773206 4766 generic.go:334] "Generic (PLEG): container finished" podID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerID="cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" exitCode=143 Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.773284 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10"} Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.832416 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerStarted","Data":"c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1"} Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.832460 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerStarted","Data":"0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3"} Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.891587 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496601-pl6qc" podStartSLOduration=1.891562028 podStartE2EDuration="1.891562028s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:01.879666634 +0000 UTC m=+5916.517623980" watchObservedRunningTime="2026-01-30 18:01:01.891562028 +0000 UTC m=+5916.529519374" Jan 30 18:01:01 crc kubenswrapper[4766]: I0130 18:01:01.892895 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.088391 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:01:02 crc kubenswrapper[4766]: W0130 18:01:02.095953 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8806aa45_5ae9_453c_8bc8_23fe8daa8e9d.slice/crio-4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e WatchSource:0}: Error finding container 4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e: Status 404 returned error can't find the container with id 4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e Jan 30 18:01:02 crc kubenswrapper[4766]: W0130 18:01:02.398390 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24a2653_c901_4306_a56b_2e2de8006403.slice/crio-cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91 WatchSource:0}: Error finding container cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91: Status 404 returned error can't find the container with id cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91 Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.398601 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.843120 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e"} Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.845001 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"826a0c9cee53980f380468b130146d783aa7261856c38f2757af740808b26324"} 
Jan 30 18:01:02 crc kubenswrapper[4766]: I0130 18:01:02.845938 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91"} Jan 30 18:01:03 crc kubenswrapper[4766]: I0130 18:01:03.856606 4766 generic.go:334] "Generic (PLEG): container finished" podID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerID="c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1" exitCode=0 Jan 30 18:01:03 crc kubenswrapper[4766]: I0130 18:01:03.856704 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerDied","Data":"c70e0ed778d72191d9df042a51eab1bfa041969650181ccb900bd84b9e95d7d1"} Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.052098 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.056870 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.065692 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v7zdn"] Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.073938 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-360a-account-create-update-9fwlc"] Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.866536 4766 generic.go:334] "Generic (PLEG): container finished" podID="40e23b5f-28fc-4354-94de-90d54908e61b" containerID="ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" exitCode=0 Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.866672 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a"} Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.871669 4766 generic.go:334] "Generic (PLEG): container finished" podID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerID="ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" exitCode=0 Jan 30 18:01:04 crc kubenswrapper[4766]: I0130 18:01:04.871904 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae"} Jan 30 18:01:06 crc kubenswrapper[4766]: I0130 18:01:06.069249 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9" path="/var/lib/kubelet/pods/3d2bd9b1-3f21-43b5-ab17-c0724bbbafd9/volumes" Jan 30 18:01:06 crc kubenswrapper[4766]: I0130 18:01:06.070346 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa06091d-37e1-4828-9f71-7160f12ac3de" path="/var/lib/kubelet/pods/aa06091d-37e1-4828-9f71-7160f12ac3de/volumes" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.040095 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:09 crc kubenswrapper[4766]: E0130 18:01:09.040844 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.151326 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.161035 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.174957 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268325 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268392 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268498 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268532 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268583 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268620 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268667 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268717 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") pod 
\"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268785 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268816 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268836 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268859 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268886 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268938 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268962 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: \"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.268991 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") pod \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\" (UID: \"7946b0e6-2de2-4708-ac83-ce1ad398d8a5\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.269027 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") pod \"5d20810a-2efe-43c6-a8e6-92a14834a048\" (UID: \"5d20810a-2efe-43c6-a8e6-92a14834a048\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.269046 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") pod \"40e23b5f-28fc-4354-94de-90d54908e61b\" (UID: 
\"40e23b5f-28fc-4354-94de-90d54908e61b\") " Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.270560 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275218 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275217 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs" (OuterVolumeSpecName: "logs") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.275523 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs" (OuterVolumeSpecName: "logs") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280474 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g" (OuterVolumeSpecName: "kube-api-access-42l2g") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "kube-api-access-42l2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280729 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq" (OuterVolumeSpecName: "kube-api-access-z6dvq") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "kube-api-access-z6dvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280745 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.280972 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts" (OuterVolumeSpecName: "scripts") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.281941 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts" (OuterVolumeSpecName: "scripts") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.286424 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94" (OuterVolumeSpecName: "kube-api-access-wkf94") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "kube-api-access-wkf94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.289654 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph" (OuterVolumeSpecName: "ceph") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.290311 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph" (OuterVolumeSpecName: "ceph") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.333396 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.345319 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.372992 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373038 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkf94\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-kube-api-access-wkf94\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373050 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373061 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373070 4766 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373079 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40e23b5f-28fc-4354-94de-90d54908e61b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373092 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373101 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373110 4766 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.373119 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389487 4766 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389558 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l2g\" (UniqueName: \"kubernetes.io/projected/5d20810a-2efe-43c6-a8e6-92a14834a048-kube-api-access-42l2g\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389572 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6dvq\" (UniqueName: \"kubernetes.io/projected/40e23b5f-28fc-4354-94de-90d54908e61b-kube-api-access-z6dvq\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.389584 4766 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.402516 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.410359 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data" (OuterVolumeSpecName: "config-data") pod "7946b0e6-2de2-4708-ac83-ce1ad398d8a5" (UID: "7946b0e6-2de2-4708-ac83-ce1ad398d8a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.434977 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data" (OuterVolumeSpecName: "config-data") pod "5d20810a-2efe-43c6-a8e6-92a14834a048" (UID: "5d20810a-2efe-43c6-a8e6-92a14834a048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.449441 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data" (OuterVolumeSpecName: "config-data") pod "40e23b5f-28fc-4354-94de-90d54908e61b" (UID: "40e23b5f-28fc-4354-94de-90d54908e61b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491942 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491983 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d20810a-2efe-43c6-a8e6-92a14834a048-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.491995 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7946b0e6-2de2-4708-ac83-ce1ad398d8a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.492009 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e23b5f-28fc-4354-94de-90d54908e61b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.929658 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7946b0e6-2de2-4708-ac83-ce1ad398d8a5","Type":"ContainerDied","Data":"d2a4e4fc66535588e46fed562ba402562d5ce80fbfd5a96ef9e01d567df2004b"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.930274 4766 scope.go:117] "RemoveContainer" containerID="ba7a3a0bd3b87ff213481ded18b09fe05a378481a605d5c64f141f56bfac1eae" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.930602 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942850 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-pl6qc" event={"ID":"5d20810a-2efe-43c6-a8e6-92a14834a048","Type":"ContainerDied","Data":"0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942891 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0506a31b4302c185010f640115c79ac98b2bccb6af61fe517bf39b47f821ddd3" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.942964 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496601-pl6qc" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.973604 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981411 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981688 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerStarted","Data":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981848 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b8665dc85-mqdzq" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon" containerID="cri-o://a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" gracePeriod=30 Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.981829 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b8665dc85-mqdzq" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log" containerID="cri-o://f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" gracePeriod=30 Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.988253 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.989259 4766 scope.go:117] "RemoveContainer" containerID="cad90a5294d7a585930cf768d8e7c6d25d6344d562eb3235af5a3bc1a335ef10" Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.995426 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969"} Jan 30 18:01:09 crc kubenswrapper[4766]: I0130 18:01:09.995469 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerStarted","Data":"90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217"} Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.004649 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40e23b5f-28fc-4354-94de-90d54908e61b","Type":"ContainerDied","Data":"a636aed8819668fe27e888c223782c929538ea199ee28b047c4b35c7334f0992"} Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.004759 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.021810 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.034393 4766 scope.go:117] "RemoveContainer" containerID="ad6524bde7488d90070d2ccbcc60c3eedc219f1cc8c7fa871d2af523184d894a" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.068291 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" path="/var/lib/kubelet/pods/7946b0e6-2de2-4708-ac83-ce1ad398d8a5/volumes" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.068928 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069694 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069733 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069747 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069757 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069779 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069785 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069796 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069802 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: E0130 18:01:10.069825 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.069830 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070082 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070103 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="7946b0e6-2de2-4708-ac83-ce1ad398d8a5" containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070118 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" containerName="glance-log" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070127 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" 
containerName="glance-httpd" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.070138 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d20810a-2efe-43c6-a8e6-92a14834a048" containerName="keystone-cron" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.071876 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.075998 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.076159 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fmg4z" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.076160 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.086671 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-646c4b5b47-xr8w7" podStartSLOduration=2.93493563 podStartE2EDuration="10.086646602s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="2026-01-30 18:01:02.101454507 +0000 UTC m=+5916.739411843" lastFinishedPulling="2026-01-30 18:01:09.253165469 +0000 UTC m=+5923.891122815" observedRunningTime="2026-01-30 18:01:10.013024516 +0000 UTC m=+5924.650981862" watchObservedRunningTime="2026-01-30 18:01:10.086646602 +0000 UTC m=+5924.724603948" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.108971 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.110639 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.110717 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111001 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111074 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c4d556457-cgwh5" podStartSLOduration=2.259421653 podStartE2EDuration="9.111059728s" podCreationTimestamp="2026-01-30 18:01:01 +0000 UTC" firstStartedPulling="2026-01-30 18:01:02.400725203 +0000 UTC m=+5917.038682549" lastFinishedPulling="2026-01-30 18:01:09.252363278 +0000 UTC m=+5923.890320624" observedRunningTime="2026-01-30 18:01:10.058843874 +0000 UTC m=+5924.696801240" watchObservedRunningTime="2026-01-30 18:01:10.111059728 +0000 UTC m=+5924.749017074" Jan 30 18:01:10 crc 
kubenswrapper[4766]: I0130 18:01:10.111132 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111169 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111213 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.111393 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.134978 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.137931 4766 scope.go:117] "RemoveContainer" containerID="155d7b6244102b757f3100d53fae683f2499dd63e37d81e454b339bfe1fcf7f8" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.152606 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h2fkl"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.152597 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b8665dc85-mqdzq" podStartSLOduration=2.702462426 podStartE2EDuration="10.152578509s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="2026-01-30 18:01:01.832344954 +0000 UTC m=+5916.470302300" lastFinishedPulling="2026-01-30 18:01:09.282461037 +0000 UTC m=+5923.920418383" observedRunningTime="2026-01-30 18:01:10.101242401 +0000 UTC m=+5924.739199747" watchObservedRunningTime="2026-01-30 18:01:10.152578509 +0000 UTC m=+5924.790535855" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.214007 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.214712 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 
18:01:10.214741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215074 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215152 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215356 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215381 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215875 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-logs\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.215964 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ddc1af26-668d-4715-b17a-e94ee4f5b571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.217494 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.218289 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.219156 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-config-data\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc 
kubenswrapper[4766]: I0130 18:01:10.223412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddc1af26-668d-4715-b17a-e94ee4f5b571-scripts\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.223980 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-ceph\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.234985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz9z\" (UniqueName: \"kubernetes.io/projected/ddc1af26-668d-4715-b17a-e94ee4f5b571-kube-api-access-jhz9z\") pod \"glance-default-external-api-0\" (UID: \"ddc1af26-668d-4715-b17a-e94ee4f5b571\") " pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.235333 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.247256 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.249338 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.252334 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.258222 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.316584 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.316618 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317011 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317091 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317145 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317231 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.317304 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420486 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420590 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420661 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420685 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420835 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.420872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 
crc kubenswrapper[4766]: I0130 18:01:10.420908 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.421536 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.422331 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.424767 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.425733 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.427358 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.428795 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.431091 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.444446 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fg2\" (UniqueName: \"kubernetes.io/projected/c25f82b3-9296-4814-92b1-59ca5c2bf2a0-kube-api-access-46fg2\") pod \"glance-default-internal-api-0\" (UID: \"c25f82b3-9296-4814-92b1-59ca5c2bf2a0\") " pod="openstack/glance-default-internal-api-0" Jan 30 18:01:10 crc kubenswrapper[4766]: I0130 18:01:10.639653 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.022986 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.033887 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerStarted","Data":"cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b"} Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.181094 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.256000 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 18:01:11 crc kubenswrapper[4766]: W0130 18:01:11.257487 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc25f82b3_9296_4814_92b1_59ca5c2bf2a0.slice/crio-5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12 WatchSource:0}: Error finding container 5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12: Status 404 returned error can't find the container with id 5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12 Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.419149 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.419243 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.898981 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:11 crc kubenswrapper[4766]: I0130 18:01:11.900682 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.062445 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e23b5f-28fc-4354-94de-90d54908e61b" path="/var/lib/kubelet/pods/40e23b5f-28fc-4354-94de-90d54908e61b/volumes" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.063581 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2" path="/var/lib/kubelet/pods/b44aa37b-c8cd-4c65-80e0-ab8a8ccdecf2/volumes" Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.081388 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"a6c6a8fe72b93334fbe4ee005ea34677fc95740aecfaa7da8f15120190f5ff3a"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.081479 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"10079542e3100095ec78d3606b78f1758626c8beaa2fe23967895215c1e592a3"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.092306 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"46fe3470a1e8c952a21e5a5c56b106e1450a25e6dcc09ddea71d13186d5cc7eb"} Jan 30 18:01:12 crc kubenswrapper[4766]: I0130 18:01:12.092354 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"5859ac9de7d745405631ba23197b358bcc787498de54dbabd3496654db837c12"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.125902 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c25f82b3-9296-4814-92b1-59ca5c2bf2a0","Type":"ContainerStarted","Data":"b17245b90c6d6cee0a37f27383e5d755d7649b7adee324e9e95cb666eb4c8082"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.130494 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ddc1af26-668d-4715-b17a-e94ee4f5b571","Type":"ContainerStarted","Data":"6a9dfdebbeb7368534cf4006d3c920e47e62b9e8722cc6b77f9bacb63b7b7dcf"} Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.159331 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.1593078549999998 podStartE2EDuration="3.159307855s" podCreationTimestamp="2026-01-30 18:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:13.146024213 +0000 UTC m=+5927.783981569" watchObservedRunningTime="2026-01-30 18:01:13.159307855 +0000 UTC m=+5927.797265201" Jan 30 18:01:13 crc kubenswrapper[4766]: I0130 18:01:13.182210 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.182188599 podStartE2EDuration="4.182188599s" podCreationTimestamp="2026-01-30 18:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:13.179976069 +0000 UTC m=+5927.817933415" watchObservedRunningTime="2026-01-30 18:01:13.182188599 +0000 UTC m=+5927.820145945" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.431321 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.433236 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.471325 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.480986 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.641547 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.641703 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 crc kubenswrapper[4766]: I0130 18:01:20.671300 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:20 
crc kubenswrapper[4766]: I0130 18:01:20.682455 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.039407 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:21 crc kubenswrapper[4766]: E0130 18:01:21.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216925 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216960 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.216969 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.217376 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.415193 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:01:21 crc kubenswrapper[4766]: I0130 18:01:21.897196 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.230694 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.231070 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.230743 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.231186 4766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.375261 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.443652 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.694214 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:23 crc kubenswrapper[4766]: I0130 18:01:23.767820 4766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.082596 4766 scope.go:117] "RemoveContainer" containerID="61e9004b9e632e72beed11f4761ff65b41d449187e767891bb96ba3995cb339f" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.107759 4766 scope.go:117] "RemoveContainer" containerID="f8e723715c56394706bb110f28e25bd51569d6ba082c9fb3e8b9a75ae2fcfda9" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.159130 4766 scope.go:117] "RemoveContainer" containerID="b8510fbc15448bdb8f9309d677310c9146372ad00679154fc9bdb8459d54cf36" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.424270 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:33 crc kubenswrapper[4766]: I0130 18:01:33.848665 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.084933 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.451583 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550268 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550478 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" containerID="cri-o://5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" gracePeriod=30 Jan 30 18:01:35 crc kubenswrapper[4766]: I0130 18:01:35.550987 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" containerID="cri-o://cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" gracePeriod=30 Jan 30 18:01:36 crc kubenswrapper[4766]: I0130 18:01:36.062105 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:36 crc kubenswrapper[4766]: E0130 18:01:36.062754 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:38 crc kubenswrapper[4766]: I0130 18:01:38.049606 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 18:01:38 crc kubenswrapper[4766]: I0130 18:01:38.052091 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jdcqq"] Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.027312 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.036203 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7364-account-create-update-5qkkz"] Jan 30 18:01:39 
crc kubenswrapper[4766]: I0130 18:01:39.399737 4766 generic.go:334] "Generic (PLEG): container finished" podID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerID="cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" exitCode=0 Jan 30 18:01:39 crc kubenswrapper[4766]: I0130 18:01:39.399805 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.055522 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632e98c6-d202-4c07-9220-636bd07da76d" path="/var/lib/kubelet/pods/632e98c6-d202-4c07-9220-636bd07da76d/volumes" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.056768 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d09a627-470a-4719-a1d8-458eda413878" path="/var/lib/kubelet/pods/9d09a627-470a-4719-a1d8-458eda413878/volumes" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.404560 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.424970 4766 generic.go:334] "Generic (PLEG): container finished" podID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" exitCode=137 Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425016 4766 generic.go:334] "Generic (PLEG): container finished" podID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" exitCode=137 Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425048 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425103 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425122 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b8665dc85-mqdzq" event={"ID":"c267e584-67ae-40ca-90dc-5967ee8be5d5","Type":"ContainerDied","Data":"826a0c9cee53980f380468b130146d783aa7261856c38f2757af740808b26324"} Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425142 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.425627 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b8665dc85-mqdzq" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520561 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520751 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520832 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.520976 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.521010 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") pod \"c267e584-67ae-40ca-90dc-5967ee8be5d5\" (UID: \"c267e584-67ae-40ca-90dc-5967ee8be5d5\") " Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.522809 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs" (OuterVolumeSpecName: "logs") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.540308 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.540511 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq" (OuterVolumeSpecName: "kube-api-access-n87tq") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "kube-api-access-n87tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.553763 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data" (OuterVolumeSpecName: "config-data") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.556453 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts" (OuterVolumeSpecName: "scripts") pod "c267e584-67ae-40ca-90dc-5967ee8be5d5" (UID: "c267e584-67ae-40ca-90dc-5967ee8be5d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.620063 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623783 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n87tq\" (UniqueName: \"kubernetes.io/projected/c267e584-67ae-40ca-90dc-5967ee8be5d5-kube-api-access-n87tq\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623816 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623829 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c267e584-67ae-40ca-90dc-5967ee8be5d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623840 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c267e584-67ae-40ca-90dc-5967ee8be5d5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.623852 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c267e584-67ae-40ca-90dc-5967ee8be5d5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638225 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: E0130 18:01:40.638628 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638664 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} err="failed to get container status \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638685 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: E0130 18:01:40.638929 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638949 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} err="failed to get container status \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": rpc error: code = NotFound desc = could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.638963 4766 scope.go:117] "RemoveContainer" containerID="a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639282 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e"} err="failed to get container status \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": rpc error: code = NotFound desc = could not find container \"a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e\": container with ID starting with a73127ab6a6bd59712e798984e6b455a31980634cc22dc15ea7787471289979e not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639334 4766 scope.go:117] "RemoveContainer" containerID="f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.639569 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24"} err="failed to get container status \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": rpc error: code = NotFound desc = could not find container \"f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24\": container with ID starting with f888ebaacfd27da2d94ec0c0eb3b9f06bd1a64f1ce632a7c2270c12a0a281a24 not found: ID does not exist" Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.761899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:40 crc kubenswrapper[4766]: I0130 18:01:40.770806 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b8665dc85-mqdzq"] Jan 30 18:01:41 crc kubenswrapper[4766]: I0130 18:01:41.414531 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:01:42 crc kubenswrapper[4766]: I0130 18:01:42.051379 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" path="/var/lib/kubelet/pods/c267e584-67ae-40ca-90dc-5967ee8be5d5/volumes" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.531188 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-757f4f657-jzgr8"] 
Jan 30 18:01:43 crc kubenswrapper[4766]: E0130 18:01:43.531937 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.531952 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log" Jan 30 18:01:43 crc kubenswrapper[4766]: E0130 18:01:43.531995 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532004 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532170 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.532207 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c267e584-67ae-40ca-90dc-5967ee8be5d5" containerName="horizon-log" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.533647 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.580544 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-757f4f657-jzgr8"] Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591502 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591567 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591630 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591674 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.591764 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693577 4766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693690 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693746 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693853 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.693885 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.694775 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-scripts\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.695670 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7b06d45-03c9-406f-8fc0-79428ec9de8f-logs\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.696917 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b06d45-03c9-406f-8fc0-79428ec9de8f-config-data\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.706220 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7b06d45-03c9-406f-8fc0-79428ec9de8f-horizon-secret-key\") pod \"horizon-757f4f657-jzgr8\" (UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.716001 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58brx\" (UniqueName: \"kubernetes.io/projected/f7b06d45-03c9-406f-8fc0-79428ec9de8f-kube-api-access-58brx\") pod \"horizon-757f4f657-jzgr8\" 
(UID: \"f7b06d45-03c9-406f-8fc0-79428ec9de8f\") " pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:43 crc kubenswrapper[4766]: I0130 18:01:43.857883 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.331331 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-757f4f657-jzgr8"] Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.464795 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"e4b9e99da2dd13870511ef416acc84ef67b11b1bc1f720de214806be59e7a4ff"} Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.827977 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-qr4v8"] Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.829613 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.839309 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qr4v8"] Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.920114 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3460-account-create-update-759zj"] Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.921468 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.923238 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.924228 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.924738 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:44 crc kubenswrapper[4766]: I0130 18:01:44.936965 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"] Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028016 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028209 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:45 
crc kubenswrapper[4766]: I0130 18:01:45.028328 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.028767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.029344 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.058328 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"heat-db-create-qr4v8\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.130765 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.130843 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.131582 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.148256 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.149224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"heat-3460-account-create-update-759zj\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.275674 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.478666 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"c89e08693cd603929762da2a1d881688bf7fe83e4451e349d8b763a26fa9d7a2"} Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.478716 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-757f4f657-jzgr8" event={"ID":"f7b06d45-03c9-406f-8fc0-79428ec9de8f","Type":"ContainerStarted","Data":"b524c7b94ab2f980c470347bfa38008ac97401f82df71c662f598776c49d58a3"} Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.510508 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-757f4f657-jzgr8" podStartSLOduration=2.510486369 podStartE2EDuration="2.510486369s" podCreationTimestamp="2026-01-30 18:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:45.502921553 +0000 UTC m=+5960.140878899" watchObservedRunningTime="2026-01-30 18:01:45.510486369 +0000 UTC m=+5960.148443715" Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.614528 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qr4v8"] Jan 30 18:01:45 crc kubenswrapper[4766]: W0130 18:01:45.774037 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf43513bc_2d21_47b3_8acb_b331c5f5f46f.slice/crio-c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089 WatchSource:0}: Error finding container c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089: Status 404 returned error can't find the container with id c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089 Jan 30 18:01:45 crc kubenswrapper[4766]: I0130 18:01:45.784552 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"] Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.488958 4766 generic.go:334] "Generic (PLEG): container finished" podID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerID="2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513" exitCode=0 Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.489018 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerDied","Data":"2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513"} Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.489458 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerStarted","Data":"c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089"} Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492437 4766 generic.go:334] "Generic (PLEG): container finished" podID="8ac9189d-ff73-4cd5-8299-276858527c74" containerID="fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5" exitCode=0 Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492487 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" 
event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerDied","Data":"fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5"} Jan 30 18:01:46 crc kubenswrapper[4766]: I0130 18:01:46.492533 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerStarted","Data":"e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7"} Jan 30 18:01:47 crc kubenswrapper[4766]: I0130 18:01:47.052598 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 18:01:47 crc kubenswrapper[4766]: I0130 18:01:47.063684 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6cksv"] Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.086815 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1262aa38-ee4d-4579-b034-3669dd58a238" path="/var/lib/kubelet/pods/1262aa38-ee4d-4579-b034-3669dd58a238/volumes" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.110786 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.120061 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193385 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") pod \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193572 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") pod \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\" (UID: \"f43513bc-2d21-47b3-8acb-b331c5f5f46f\") " Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193669 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") pod \"8ac9189d-ff73-4cd5-8299-276858527c74\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.193708 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") pod \"8ac9189d-ff73-4cd5-8299-276858527c74\" (UID: \"8ac9189d-ff73-4cd5-8299-276858527c74\") " Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.197336 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f43513bc-2d21-47b3-8acb-b331c5f5f46f" (UID: "f43513bc-2d21-47b3-8acb-b331c5f5f46f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.197651 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ac9189d-ff73-4cd5-8299-276858527c74" (UID: "8ac9189d-ff73-4cd5-8299-276858527c74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.208522 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg" (OuterVolumeSpecName: "kube-api-access-626hg") pod "f43513bc-2d21-47b3-8acb-b331c5f5f46f" (UID: "f43513bc-2d21-47b3-8acb-b331c5f5f46f"). InnerVolumeSpecName "kube-api-access-626hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.215414 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2" (OuterVolumeSpecName: "kube-api-access-xxcc2") pod "8ac9189d-ff73-4cd5-8299-276858527c74" (UID: "8ac9189d-ff73-4cd5-8299-276858527c74"). InnerVolumeSpecName "kube-api-access-xxcc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297090 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626hg\" (UniqueName: \"kubernetes.io/projected/f43513bc-2d21-47b3-8acb-b331c5f5f46f-kube-api-access-626hg\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297138 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43513bc-2d21-47b3-8acb-b331c5f5f46f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297147 4766 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ac9189d-ff73-4cd5-8299-276858527c74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.297158 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcc2\" (UniqueName: \"kubernetes.io/projected/8ac9189d-ff73-4cd5-8299-276858527c74-kube-api-access-xxcc2\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.512986 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3460-account-create-update-759zj" event={"ID":"f43513bc-2d21-47b3-8acb-b331c5f5f46f","Type":"ContainerDied","Data":"c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089"} Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.513031 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ac3beff238d977b85f7351dc801c9788a23a826a6bd96eee8b251b30573089" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.513033 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3460-account-create-update-759zj" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514880 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qr4v8" event={"ID":"8ac9189d-ff73-4cd5-8299-276858527c74","Type":"ContainerDied","Data":"e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7"} Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514908 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qr4v8" Jan 30 18:01:48 crc kubenswrapper[4766]: I0130 18:01:48.514923 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75645429bf97d490c8a166c790ed1f2e6c9945b07977ae741786fb7f91fa0f7" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.176473 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-276pq"] Jan 30 18:01:50 crc kubenswrapper[4766]: E0130 18:01:50.177397 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177411 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update" Jan 30 18:01:50 crc kubenswrapper[4766]: E0130 18:01:50.177430 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177436 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177673 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" containerName="mariadb-account-create-update" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.177694 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" containerName="mariadb-database-create" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.178558 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.180502 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nk49g" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.181309 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.190167 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-276pq"] Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238024 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238289 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.238478 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341350 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341493 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.341571 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.354439 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.354996 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" 
Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.369537 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"heat-db-sync-276pq\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " pod="openstack/heat-db-sync-276pq" Jan 30 18:01:50 crc kubenswrapper[4766]: I0130 18:01:50.499558 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq" Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.005892 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-276pq"] Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.014014 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.040451 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:01:51 crc kubenswrapper[4766]: E0130 18:01:51.040668 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.414027 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:01:51 crc kubenswrapper[4766]: I0130 18:01:51.547951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerStarted","Data":"6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2"} Jan 30 18:01:53 crc kubenswrapper[4766]: I0130 18:01:53.858068 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:53 crc kubenswrapper[4766]: I0130 18:01:53.858575 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:01:58 crc kubenswrapper[4766]: I0130 18:01:58.610966 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerStarted","Data":"2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169"} Jan 30 18:01:58 crc kubenswrapper[4766]: I0130 18:01:58.629111 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-276pq" podStartSLOduration=1.680803206 podStartE2EDuration="8.629090784s" podCreationTimestamp="2026-01-30 18:01:50 +0000 UTC" firstStartedPulling="2026-01-30 18:01:51.01380312 +0000 UTC m=+5965.651760466" lastFinishedPulling="2026-01-30 18:01:57.962090698 +0000 UTC m=+5972.600048044" observedRunningTime="2026-01-30 18:01:58.627660116 +0000 UTC m=+5973.265617462" watchObservedRunningTime="2026-01-30 18:01:58.629090784 +0000 UTC m=+5973.267048130" Jan 30 18:02:00 crc kubenswrapper[4766]: I0130 
18:02:00.646481 4766 generic.go:334] "Generic (PLEG): container finished" podID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerID="2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169" exitCode=0 Jan 30 18:02:00 crc kubenswrapper[4766]: I0130 18:02:00.646585 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerDied","Data":"2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169"} Jan 30 18:02:01 crc kubenswrapper[4766]: I0130 18:02:01.413743 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-646c4b5b47-xr8w7" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 30 18:02:01 crc kubenswrapper[4766]: I0130 18:02:01.413885 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.013111 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096554 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096681 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.096742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") pod \"05bc6794-04be-40f4-8fa7-552f45a104c0\" (UID: \"05bc6794-04be-40f4-8fa7-552f45a104c0\") " Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.104316 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss" (OuterVolumeSpecName: "kube-api-access-t9gss") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "kube-api-access-t9gss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.129339 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.182412 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data" (OuterVolumeSpecName: "config-data") pod "05bc6794-04be-40f4-8fa7-552f45a104c0" (UID: "05bc6794-04be-40f4-8fa7-552f45a104c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197907 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9gss\" (UniqueName: \"kubernetes.io/projected/05bc6794-04be-40f4-8fa7-552f45a104c0-kube-api-access-t9gss\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197953 4766 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.197966 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05bc6794-04be-40f4-8fa7-552f45a104c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667506 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-276pq" event={"ID":"05bc6794-04be-40f4-8fa7-552f45a104c0","Type":"ContainerDied","Data":"6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2"} Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667581 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e497384c613a0a27c47ed2ee415d94f97b14a0c4324f393f9886b6b4cf7c9b2" Jan 30 18:02:02 crc kubenswrapper[4766]: I0130 18:02:02.667605 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-276pq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.679459 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"] Jan 30 18:02:03 crc kubenswrapper[4766]: E0130 18:02:03.680023 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.680045 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.680334 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" containerName="heat-db-sync" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.681005 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691049 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nk49g" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691337 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.691442 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.696740 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"] Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.842427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.842886 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.843036 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.843264 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.860845 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-757f4f657-jzgr8" podUID="f7b06d45-03c9-406f-8fc0-79428ec9de8f" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.896224 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"] Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.897902 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.901363 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.938234 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"] Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948051 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948358 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948456 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948565 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948714 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948796 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948880 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.948961 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " 
pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.958342 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-combined-ca-bundle\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.962132 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.965530 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/364a6690-a249-4765-b86e-b72ca919edb8-config-data-custom\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.972991 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgdl\" (UniqueName: \"kubernetes.io/projected/364a6690-a249-4765-b86e-b72ca919edb8-kube-api-access-ttgdl\") pod \"heat-engine-54c46d7b9c-z94n2\" (UID: \"364a6690-a249-4765-b86e-b72ca919edb8\") " pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.973067 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"] Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.974796 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.984849 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 18:02:03 crc kubenswrapper[4766]: I0130 18:02:03.984841 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"] Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.025709 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.041001 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:02:04 crc kubenswrapper[4766]: E0130 18:02:04.041415 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.052942 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053017 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053060 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.053123 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.062804 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data-custom\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.067163 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-combined-ca-bundle\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.071374 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf86\" (UniqueName: \"kubernetes.io/projected/e11fd011-1725-4cdd-979f-75eecd0329b2-kube-api-access-lxf86\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.078220 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11fd011-1725-4cdd-979f-75eecd0329b2-config-data\") pod \"heat-api-675bcfc5ff-kvdtq\" (UID: \"e11fd011-1725-4cdd-979f-75eecd0329b2\") " pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155723 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155823 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.155865 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.156972 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.233207 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261884 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.261923 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.262015 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.281728 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data-custom\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.282063 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-config-data\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.288201 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhzc\" (UniqueName: \"kubernetes.io/projected/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-kube-api-access-ghhzc\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.297297 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f44ca0-52f4-4d4a-aeb8-18275fff50eb-combined-ca-bundle\") pod \"heat-cfnapi-675bf5dcf-ltj5r\" (UID: \"65f44ca0-52f4-4d4a-aeb8-18275fff50eb\") " pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.494757 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.611809 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54c46d7b9c-z94n2"] Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.711826 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54c46d7b9c-z94n2" event={"ID":"364a6690-a249-4765-b86e-b72ca919edb8","Type":"ContainerStarted","Data":"ce8ae0fc85c7a535dbaf17695e17b5e4a72cb6f4ebd4a6284d9c4fdd9ad9ad58"} Jan 30 18:02:04 crc kubenswrapper[4766]: I0130 18:02:04.843302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-675bcfc5ff-kvdtq"] Jan 30 18:02:04 crc kubenswrapper[4766]: W0130 18:02:04.845767 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode11fd011_1725_4cdd_979f_75eecd0329b2.slice/crio-357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b WatchSource:0}: Error finding container 357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b: Status 404 returned error can't find the container with id 357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.028956 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-675bf5dcf-ltj5r"] Jan 30 18:02:05 crc kubenswrapper[4766]: W0130 18:02:05.035475 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f44ca0_52f4_4d4a_aeb8_18275fff50eb.slice/crio-9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4 WatchSource:0}: Error finding container 9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4: Status 404 returned error can't find the container with id 9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4 Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.724587 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-675bcfc5ff-kvdtq" event={"ID":"e11fd011-1725-4cdd-979f-75eecd0329b2","Type":"ContainerStarted","Data":"357bb5060c9ec16c6953b7efbca1c60aa9cc61ba00658c85c5d8be5a6755233b"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.752297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" event={"ID":"65f44ca0-52f4-4d4a-aeb8-18275fff50eb","Type":"ContainerStarted","Data":"9098b323f1c0c9d25f39313a0ceb7a214fbd23cec025829cbf38752afebb54e4"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.754623 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54c46d7b9c-z94n2" event={"ID":"364a6690-a249-4765-b86e-b72ca919edb8","Type":"ContainerStarted","Data":"aa91cb66d167101f26864989ffb0150f0e46af0db8f973a9b047c4e90830006d"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.754760 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.758515 4766 generic.go:334] "Generic (PLEG): container finished" podID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerID="5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" exitCode=137 Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.758586 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" 
event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4"} Jan 30 18:02:05 crc kubenswrapper[4766]: I0130 18:02:05.781279 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-54c46d7b9c-z94n2" podStartSLOduration=2.781259208 podStartE2EDuration="2.781259208s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:02:05.770821165 +0000 UTC m=+5980.408778511" watchObservedRunningTime="2026-01-30 18:02:05.781259208 +0000 UTC m=+5980.419216554" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.052849 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.134922 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135133 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135210 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135260 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.135326 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") pod \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\" (UID: \"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d\") " Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.141741 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs" (OuterVolumeSpecName: "logs") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.147852 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n" (OuterVolumeSpecName: "kube-api-access-zd65n") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "kube-api-access-zd65n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.148043 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.174042 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts" (OuterVolumeSpecName: "scripts") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.210233 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data" (OuterVolumeSpecName: "config-data") pod "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" (UID: "8806aa45-5ae9-453c-8bc8-23fe8daa8e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239544 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd65n\" (UniqueName: \"kubernetes.io/projected/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-kube-api-access-zd65n\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239860 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.239981 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.240067 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.240151 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.778391 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-646c4b5b47-xr8w7" event={"ID":"8806aa45-5ae9-453c-8bc8-23fe8daa8e9d","Type":"ContainerDied","Data":"4f401447cb213f1837b37ef48530e7e3b154870ca692e29ced373b3aa6253a8e"} Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.778427 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-646c4b5b47-xr8w7" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.779639 4766 scope.go:117] "RemoveContainer" containerID="cd3edfbcd9f13bf9b2e70c6b3b5b717d1cb225662e84f5a9a9139e0471a7a39b" Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.836763 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:02:06 crc kubenswrapper[4766]: I0130 18:02:06.849908 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-646c4b5b47-xr8w7"] Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.201687 4766 scope.go:117] "RemoveContainer" containerID="5a1b1f2fd93ecc065b4b50e7dd571ff4a7f60f4b4ce4f7d89d8895fe416e14e4" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.794073 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-675bcfc5ff-kvdtq" event={"ID":"e11fd011-1725-4cdd-979f-75eecd0329b2","Type":"ContainerStarted","Data":"056daf7f68fc1873b7c3f4bd33a7243161ce3a6b753d744707ed45ee0fb6cf0e"} Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.794478 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.799024 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" event={"ID":"65f44ca0-52f4-4d4a-aeb8-18275fff50eb","Type":"ContainerStarted","Data":"ac33631deed2079516e706577c495fb1391ab0237cb596d6f46246e62043f0d0"} Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.799364 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:07 crc kubenswrapper[4766]: I0130 18:02:07.843098 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-675bcfc5ff-kvdtq" podStartSLOduration=2.431763265 podStartE2EDuration="4.843074945s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="2026-01-30 18:02:04.848751007 +0000 UTC m=+5979.486708353" lastFinishedPulling="2026-01-30 18:02:07.260062687 +0000 UTC m=+5981.898020033" observedRunningTime="2026-01-30 18:02:07.821904298 +0000 UTC m=+5982.459861664" watchObservedRunningTime="2026-01-30 18:02:07.843074945 +0000 UTC m=+5982.481032291" Jan 30 18:02:08 crc kubenswrapper[4766]: I0130 18:02:08.053348 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" path="/var/lib/kubelet/pods/8806aa45-5ae9-453c-8bc8-23fe8daa8e9d/volumes" Jan 30 18:02:14 crc kubenswrapper[4766]: I0130 18:02:14.058865 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-54c46d7b9c-z94n2" Jan 30 18:02:14 crc kubenswrapper[4766]: I0130 18:02:14.077386 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" podStartSLOduration=8.855521919 podStartE2EDuration="11.077363317s" podCreationTimestamp="2026-01-30 18:02:03 +0000 UTC" firstStartedPulling="2026-01-30 18:02:05.038378564 +0000 UTC m=+5979.676335910" lastFinishedPulling="2026-01-30 18:02:07.260219962 +0000 UTC m=+5981.898177308" observedRunningTime="2026-01-30 18:02:07.846671083 +0000 UTC m=+5982.484628439" watchObservedRunningTime="2026-01-30 18:02:14.077363317 +0000 UTC m=+5988.715320663" Jan 30 18:02:15 crc kubenswrapper[4766]: I0130 18:02:15.648646 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-675bcfc5ff-kvdtq" Jan 30 18:02:16 crc kubenswrapper[4766]: I0130 18:02:16.021771 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:02:16 crc kubenswrapper[4766]: I0130 18:02:16.056891 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-675bf5dcf-ltj5r" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.039485 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.868446 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-757f4f657-jzgr8" Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.892267 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.952837 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.953073 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" containerID="cri-o://90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" gracePeriod=30 Jan 30 18:02:17 crc kubenswrapper[4766]: I0130 18:02:17.953217 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" containerID="cri-o://1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" gracePeriod=30 Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.895888 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.933879 4766 generic.go:334] "Generic (PLEG): container finished" podID="e24a2653-c901-4306-a56b-2e2de8006403" containerID="1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" exitCode=0 Jan 30 18:02:21 crc kubenswrapper[4766]: I0130 18:02:21.933933 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969"} Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.806884 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:24 crc kubenswrapper[4766]: E0130 18:02:24.807904 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.807917 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: E0130 
18:02:24.807943 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.807949 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.808141 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.808165 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8806aa45-5ae9-453c-8bc8-23fe8daa8e9d" containerName="horizon-log" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.809740 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.815322 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.823063 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.956991 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.957062 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:24 crc kubenswrapper[4766]: I0130 18:02:24.957088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059533 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059605 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.059832 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.060157 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.060359 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.088557 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.139874 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.766588 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz"] Jan 30 18:02:25 crc kubenswrapper[4766]: W0130 18:02:25.774859 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8b8ccc_a37c_45d4_97e9_a3eb1bf7f951.slice/crio-72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9 WatchSource:0}: Error finding container 72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9: Status 404 returned error can't find the container with id 72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9 Jan 30 18:02:25 crc kubenswrapper[4766]: I0130 18:02:25.973553 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerStarted","Data":"72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9"} Jan 30 18:02:26 crc kubenswrapper[4766]: I0130 18:02:26.988606 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="7a7e6dc22b12132566cdb1f3372e28ef36a2890626fed1aeffe7f2d40e465b95" exitCode=0 Jan 30 18:02:26 crc kubenswrapper[4766]: I0130 18:02:26.988676 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"7a7e6dc22b12132566cdb1f3372e28ef36a2890626fed1aeffe7f2d40e465b95"} Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.061432 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.069463 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.078542 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-823f-account-create-update-pttr7"] Jan 30 18:02:27 crc kubenswrapper[4766]: I0130 18:02:27.087504 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tm7r5"] Jan 30 18:02:28 crc kubenswrapper[4766]: I0130 18:02:28.061024 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5946960e-4a1d-4360-ae75-7648934eeb0c" path="/var/lib/kubelet/pods/5946960e-4a1d-4360-ae75-7648934eeb0c/volumes" Jan 30 18:02:28 crc kubenswrapper[4766]: I0130 18:02:28.062842 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01e6326-2d83-4889-9b7a-f45b9f6f3063" path="/var/lib/kubelet/pods/f01e6326-2d83-4889-9b7a-f45b9f6f3063/volumes" Jan 30 18:02:30 crc kubenswrapper[4766]: I0130 18:02:30.020890 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="e1774fc0414c2122617112cd356061ed0dbd63ec7d27ae05b2ae3a89ad7e1ad4" exitCode=0 Jan 30 18:02:30 crc kubenswrapper[4766]: I0130 18:02:30.021488 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" 
event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"e1774fc0414c2122617112cd356061ed0dbd63ec7d27ae05b2ae3a89ad7e1ad4"} Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.032366 4766 generic.go:334] "Generic (PLEG): container finished" podID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerID="1ef9355ea802924f0712dff7fbacf7e0ea64aef0a6e13c663de8f7b7767d1a2e" exitCode=0 Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.032416 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"1ef9355ea802924f0712dff7fbacf7e0ea64aef0a6e13c663de8f7b7767d1a2e"} Jan 30 18:02:31 crc kubenswrapper[4766]: I0130 18:02:31.895802 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.394231 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540416 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540495 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.540780 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") pod \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\" (UID: \"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951\") " Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.542843 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle" (OuterVolumeSpecName: "bundle") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.548372 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util" (OuterVolumeSpecName: "util") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.548742 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj" (OuterVolumeSpecName: "kube-api-access-jc6xj") pod "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" (UID: "1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951"). InnerVolumeSpecName "kube-api-access-jc6xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643421 4766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-util\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643470 4766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:32 crc kubenswrapper[4766]: I0130 18:02:32.643487 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6xj\" (UniqueName: \"kubernetes.io/projected/1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951-kube-api-access-jc6xj\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.055907 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" event={"ID":"1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951","Type":"ContainerDied","Data":"72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9"} Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.056622 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b79caf105f2e2df5c9fdd37b8df4ef9fb379dc9f2b059dc9a5873f6f5b74d9" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.056063 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.323446 4766 scope.go:117] "RemoveContainer" containerID="e819a03329a60f5f707891aab84349c260acf78c226512ac444ec14f902344ab" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.346002 4766 scope.go:117] "RemoveContainer" containerID="a53070aa7bf54f8e11851d2a42b467aeddd56da5149b02bbbe37c928d714291e" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.403498 4766 scope.go:117] "RemoveContainer" containerID="5dc0db8c133f2561de270e8d644a27c259f84f30c2c5e0b609690a8e3867c8ad" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.446160 4766 scope.go:117] "RemoveContainer" containerID="b4325ef51e7b158001efb6dda87f6f28be293ddce88e91cc9243a0d6ae57bb71" Jan 30 18:02:33 crc kubenswrapper[4766]: I0130 18:02:33.468757 4766 scope.go:117] "RemoveContainer" containerID="ee4c2e79057aa3b57922a39a79c5f1fe75768ec53755ad01f26f4a886101dcae" Jan 30 18:02:36 crc kubenswrapper[4766]: I0130 18:02:36.054021 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 18:02:36 crc kubenswrapper[4766]: I0130 18:02:36.062289 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ngkz2"] Jan 30 18:02:38 crc kubenswrapper[4766]: I0130 18:02:38.049576 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca69b03-2748-4111-8dd8-0cc28cf328d3" path="/var/lib/kubelet/pods/fca69b03-2748-4111-8dd8-0cc28cf328d3/volumes" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.894984 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.895601 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.922527 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923388 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923412 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923437 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="util" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923445 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="util" Jan 30 18:02:41 crc kubenswrapper[4766]: E0130 18:02:41.923476 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="pull" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923482 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="pull" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.923666 4766 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951" containerName="extract" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.924392 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.926405 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-b8nc5" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.926800 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.928460 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 18:02:41 crc kubenswrapper[4766]: I0130 18:02:41.964907 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.035561 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.037203 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.040666 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zttmf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.040906 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.065639 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.066880 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.072447 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcpm\" (UniqueName: \"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.088428 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.118650 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174352 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcpm\" (UniqueName: \"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174418 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174458 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174494 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.174737 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.206836 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcpm\" (UniqueName: 
\"kubernetes.io/projected/ed5054c0-0009-40bb-8b4c-6e1a4da07b41-kube-api-access-hgcpm\") pod \"obo-prometheus-operator-68bc856cb9-npbz4\" (UID: \"ed5054c0-0009-40bb-8b4c-6e1a4da07b41\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.246912 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.267475 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.269198 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.271302 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-x6cmd" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.278504 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280566 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280715 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.280766 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.290256 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.290840 4766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.295781 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e9a3cc5-7614-4db3-8c5b-590bff436549-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-v5dzf\" (UID: \"4e9a3cc5-7614-4db3-8c5b-590bff436549\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.296817 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86dd422f-41b2-438f-9a62-e558efc71c90-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-946744c6d-qm4dx\" (UID: \"86dd422f-41b2-438f-9a62-e558efc71c90\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.324045 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.369643 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.382366 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.382423 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.400847 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.457951 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.465676 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.471070 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n9zjj" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.484387 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.486666 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.486741 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.517234 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.525095 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgcz\" (UniqueName: \"kubernetes.io/projected/ccbd3ff2-7dc6-488c-ae64-d0710464e20d-kube-api-access-cxgcz\") pod \"observability-operator-59bdc8b94-zbt8s\" (UID: \"ccbd3ff2-7dc6-488c-ae64-d0710464e20d\") " pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.589652 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.589846 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.682033 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.711543 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.711787 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.713307 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.734402 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhp4\" (UniqueName: \"kubernetes.io/projected/9f9dfe10-4d1d-4081-b3f3-4e7e4be37815-kube-api-access-fdhp4\") pod \"perses-operator-5bf474d74f-bgqzt\" (UID: \"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815\") " pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:42 crc kubenswrapper[4766]: I0130 18:02:42.823702 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.106515 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.120584 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.149476 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx"] Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.194975 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" event={"ID":"86dd422f-41b2-438f-9a62-e558efc71c90","Type":"ContainerStarted","Data":"387328638ab2dee923c355b91402386ac8a610a08ee2db55cac8a5fc4cf85fb2"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.196668 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" event={"ID":"ed5054c0-0009-40bb-8b4c-6e1a4da07b41","Type":"ContainerStarted","Data":"0400ba551869aca326a34e40e075f0e1333962d5a047499cc7cfe746b5606c79"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.198906 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" event={"ID":"4e9a3cc5-7614-4db3-8c5b-590bff436549","Type":"ContainerStarted","Data":"2c82d6e597c8b2c38e64083a01681e90133214195d1a69197b92310389ed04cc"} Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.292302 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbt8s"] Jan 30 18:02:43 crc kubenswrapper[4766]: W0130 18:02:43.293347 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccbd3ff2_7dc6_488c_ae64_d0710464e20d.slice/crio-9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab WatchSource:0}: Error finding container 9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab: Status 404 returned error can't find the container with id 9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab Jan 30 18:02:43 crc kubenswrapper[4766]: I0130 18:02:43.387509 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bgqzt"] Jan 30 18:02:43 crc kubenswrapper[4766]: W0130 18:02:43.471218 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9dfe10_4d1d_4081_b3f3_4e7e4be37815.slice/crio-23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053 WatchSource:0}: Error finding container 23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053: Status 404 returned error can't find the container with id 23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053 Jan 30 18:02:44 crc kubenswrapper[4766]: I0130 18:02:44.209165 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" event={"ID":"ccbd3ff2-7dc6-488c-ae64-d0710464e20d","Type":"ContainerStarted","Data":"9599fc1c8b935f03287cde6a3ecab6e7b16ef37431303a991da5b693cc226aab"} Jan 30 18:02:44 crc kubenswrapper[4766]: I0130 
18:02:44.210944 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" event={"ID":"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815","Type":"ContainerStarted","Data":"23843974b68c10138420fe9bfa48afe8a2d36cb1976eadf30c0eb0b2fce0d053"} Jan 30 18:02:48 crc kubenswrapper[4766]: I0130 18:02:48.276136 4766 generic.go:334] "Generic (PLEG): container finished" podID="e24a2653-c901-4306-a56b-2e2de8006403" containerID="90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" exitCode=137 Jan 30 18:02:48 crc kubenswrapper[4766]: I0130 18:02:48.276332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217"} Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.206396 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352624 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352745 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352787 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352823 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.352934 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") pod \"e24a2653-c901-4306-a56b-2e2de8006403\" (UID: \"e24a2653-c901-4306-a56b-2e2de8006403\") " Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.353968 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs" (OuterVolumeSpecName: "logs") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.359046 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.362497 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl" (OuterVolumeSpecName: "kube-api-access-5hmkl") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "kube-api-access-5hmkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.382077 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data" (OuterVolumeSpecName: "config-data") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.385899 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts" (OuterVolumeSpecName: "scripts") pod "e24a2653-c901-4306-a56b-2e2de8006403" (UID: "e24a2653-c901-4306-a56b-2e2de8006403"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.438885 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4d556457-cgwh5" event={"ID":"e24a2653-c901-4306-a56b-2e2de8006403","Type":"ContainerDied","Data":"cadf5bf4bc315740c9e7fe57dc7c31b825904f80226e6412c605c910373f6d91"} Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.438936 4766 scope.go:117] "RemoveContainer" containerID="1e480dbcd993b0ab6a788770045d86acbc61597646aa5360f9b83b164e59d969" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.439043 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4d556457-cgwh5" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454824 4766 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454864 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hmkl\" (UniqueName: \"kubernetes.io/projected/e24a2653-c901-4306-a56b-2e2de8006403-kube-api-access-5hmkl\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454877 4766 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e24a2653-c901-4306-a56b-2e2de8006403-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454888 4766 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e24a2653-c901-4306-a56b-2e2de8006403-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.454896 4766 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e24a2653-c901-4306-a56b-2e2de8006403-logs\") on node \"crc\" DevicePath \"\"" Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.496189 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.514808 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c4d556457-cgwh5"] Jan 30 18:02:56 crc kubenswrapper[4766]: I0130 18:02:56.896208 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c4d556457-cgwh5" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:02:58 crc kubenswrapper[4766]: I0130 18:02:58.058728 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24a2653-c901-4306-a56b-2e2de8006403" path="/var/lib/kubelet/pods/e24a2653-c901-4306-a56b-2e2de8006403/volumes" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.668169 4766 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.668742 4766 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) 
--images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d584c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITO
RING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxgcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-zbt8s_openshift-operators(ccbd3ff2-7dc6-488c-ae64-d0710464e20d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 18:03:00 crc kubenswrapper[4766]: E0130 18:03:00.669976 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podUID="ccbd3ff2-7dc6-488c-ae64-d0710464e20d" Jan 30 18:03:00 crc kubenswrapper[4766]: I0130 18:03:00.830746 4766 scope.go:117] "RemoveContainer" containerID="90394fdff017d58c0e8cd3327168199dc8c7d1df43cf284b9f898399e036a217" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.489922 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" event={"ID":"4e9a3cc5-7614-4db3-8c5b-590bff436549","Type":"ContainerStarted","Data":"a4b4c9f0f62679eca61ffd8170eae4fb7bc7caf251e227b23da83c9e910015dc"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.493328 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" event={"ID":"86dd422f-41b2-438f-9a62-e558efc71c90","Type":"ContainerStarted","Data":"5452329c351a963d4673f268bf0c5fe3355507b6b8abdd7a487e84d95b559d3e"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.495524 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" event={"ID":"9f9dfe10-4d1d-4081-b3f3-4e7e4be37815","Type":"ContainerStarted","Data":"6b572102515489d3b29317cf517ffab36ffdaeb05ba662b93e01076576fee807"} Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.495661 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.497311 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" event={"ID":"ed5054c0-0009-40bb-8b4c-6e1a4da07b41","Type":"ContainerStarted","Data":"56179728a168c33618e889a6f300e5a6335a23cda4413e7bc85f27223ddcd3ef"} Jan 30 18:03:01 crc kubenswrapper[4766]: E0130 18:03:01.498665 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podUID="ccbd3ff2-7dc6-488c-ae64-d0710464e20d" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.521963 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-v5dzf" podStartSLOduration=1.834528229 podStartE2EDuration="19.521941771s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.145472862 +0000 UTC m=+6017.783430208" lastFinishedPulling="2026-01-30 18:03:00.832886404 +0000 UTC m=+6035.470843750" observedRunningTime="2026-01-30 18:03:01.517778877 +0000 UTC m=+6036.155736223" watchObservedRunningTime="2026-01-30 18:03:01.521941771 +0000 UTC m=+6036.159899117" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.590560 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" podStartSLOduration=2.249317064 podStartE2EDuration="19.59053873s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.486972969 +0000 UTC m=+6018.124930315" lastFinishedPulling="2026-01-30 18:03:00.828194635 +0000 UTC m=+6035.466151981" observedRunningTime="2026-01-30 18:03:01.584222278 +0000 UTC m=+6036.222179624" watchObservedRunningTime="2026-01-30 18:03:01.59053873 +0000 UTC m=+6036.228496076" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.617543 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-npbz4" podStartSLOduration=2.919334992 podStartE2EDuration="20.617522026s" podCreationTimestamp="2026-01-30 
18:02:41 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.129976971 +0000 UTC m=+6017.767934317" lastFinishedPulling="2026-01-30 18:03:00.828164005 +0000 UTC m=+6035.466121351" observedRunningTime="2026-01-30 18:03:01.605511479 +0000 UTC m=+6036.243468825" watchObservedRunningTime="2026-01-30 18:03:01.617522026 +0000 UTC m=+6036.255479372" Jan 30 18:03:01 crc kubenswrapper[4766]: I0130 18:03:01.651832 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-946744c6d-qm4dx" podStartSLOduration=1.985109754 podStartE2EDuration="19.65180665s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.164029739 +0000 UTC m=+6017.801987085" lastFinishedPulling="2026-01-30 18:03:00.830726635 +0000 UTC m=+6035.468683981" observedRunningTime="2026-01-30 18:03:01.632584906 +0000 UTC m=+6036.270542252" watchObservedRunningTime="2026-01-30 18:03:01.65180665 +0000 UTC m=+6036.289763996" Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.056913 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.069275 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.078707 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4b67-account-create-update-85sd5"] Jan 30 18:03:05 crc kubenswrapper[4766]: I0130 18:03:05.092367 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5n9p6"] Jan 30 18:03:06 crc kubenswrapper[4766]: I0130 18:03:06.055593 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ade9e5-b989-431e-995d-1dec1432ed75" path="/var/lib/kubelet/pods/03ade9e5-b989-431e-995d-1dec1432ed75/volumes" Jan 30 18:03:06 crc kubenswrapper[4766]: I0130 18:03:06.057022 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb39a90f-2911-4e3f-a034-025eb6f8077d" path="/var/lib/kubelet/pods/cb39a90f-2911-4e3f-a034-025eb6f8077d/volumes" Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.057732 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.063133 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hn8dr"] Jan 30 18:03:12 crc kubenswrapper[4766]: I0130 18:03:12.828874 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-bgqzt" Jan 30 18:03:14 crc kubenswrapper[4766]: I0130 18:03:14.068268 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d89feb8-9495-4c8a-a424-37720df352bb" path="/var/lib/kubelet/pods/2d89feb8-9495-4c8a-a424-37720df352bb/volumes" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.655526 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" event={"ID":"ccbd3ff2-7dc6-488c-ae64-d0710464e20d","Type":"ContainerStarted","Data":"75bae606af4c056e6d449ad5f7341e03863b098a11a3c404d1fa28d730b4a928"} Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.657591 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.682396 4766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" podStartSLOduration=1.993970856 podStartE2EDuration="33.682374917s" podCreationTimestamp="2026-01-30 18:02:42 +0000 UTC" firstStartedPulling="2026-01-30 18:02:43.298486683 +0000 UTC m=+6017.936444029" lastFinishedPulling="2026-01-30 18:03:14.986890744 +0000 UTC m=+6049.624848090" observedRunningTime="2026-01-30 18:03:15.675314184 +0000 UTC m=+6050.313271560" watchObservedRunningTime="2026-01-30 18:03:15.682374917 +0000 UTC m=+6050.320332263" Jan 30 18:03:15 crc kubenswrapper[4766]: I0130 18:03:15.710655 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-zbt8s" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.281608 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.282314 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" containerID="cri-o://4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" gracePeriod=2 Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.314924 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.345454 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.345979 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346002 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.346021 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346030 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: E0130 18:03:18.346051 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346059 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346332 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346352 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerName="openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.346378 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24a2653-c901-4306-a56b-2e2de8006403" containerName="horizon-log" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.347420 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.351212 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" podUID="1f134cd2-6d22-47cd-9ef6-bfdda2701067" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.360895 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.487979 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.488089 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.488266 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.503877 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.509289 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.512466 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6lq2r" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.522649 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.591932 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592044 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592304 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.592388 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.593204 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.610107 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f134cd2-6d22-47cd-9ef6-bfdda2701067-openstack-config-secret\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.642429 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfw7\" (UniqueName: \"kubernetes.io/projected/1f134cd2-6d22-47cd-9ef6-bfdda2701067-kube-api-access-gzfw7\") pod \"openstackclient\" (UID: \"1f134cd2-6d22-47cd-9ef6-bfdda2701067\") " pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.680380 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.699531 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.732009 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2qk\" (UniqueName: \"kubernetes.io/projected/899280ca-43e9-46f7-8204-a90e682a0656-kube-api-access-ld2qk\") pod \"kube-state-metrics-0\" (UID: \"899280ca-43e9-46f7-8204-a90e682a0656\") " pod="openstack/kube-state-metrics-0" Jan 30 18:03:18 crc kubenswrapper[4766]: I0130 18:03:18.834856 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.729916 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.745971 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.753936 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754219 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754334 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754431 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.754545 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-4ncrl" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.819756 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="7c586850-0ed6-4949-9087-0e66405455ce" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.868310 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.883841 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.883993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 
crc kubenswrapper[4766]: I0130 18:03:19.884088 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884143 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884285 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884337 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.884424 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985662 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985726 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985767 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985817 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: 
\"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985851 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985889 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.985916 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:19 crc kubenswrapper[4766]: I0130 18:03:19.993037 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.003930 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.017038 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.023709 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9044d49e-1762-437b-86a3-8697b46a1930-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.024103 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.025616 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9044d49e-1762-437b-86a3-8697b46a1930-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.046662 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.056717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztt7g\" (UniqueName: \"kubernetes.io/projected/9044d49e-1762-437b-86a3-8697b46a1930-kube-api-access-ztt7g\") pod \"alertmanager-metric-storage-0\" (UID: \"9044d49e-1762-437b-86a3-8697b46a1930\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.084050 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.099421 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.102161 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.121753 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132513 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132734 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132849 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.132944 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133032 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133243 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkx8g" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.133878 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.143800 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.229528 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.303993 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.304359 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311389 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311471 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311495 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311551 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311576 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311595 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311667 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.311800 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415362 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415417 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415449 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415478 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415496 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415521 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415539 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415559 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415592 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.415644 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422133 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422756 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.422806 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/23ec4e7c-3732-4892-897e-5b2a5e7c2577-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.429019 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.432630 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.432737 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.435153 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.435682 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/23ec4e7c-3732-4892-897e-5b2a5e7c2577-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.444970 4766 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.445014 4766 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/23096af48abc74568aa15792c175d7579a11f0188cc4a814c54861f42a908f6a/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:20 crc kubenswrapper[4766]: I0130 18:03:20.459224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd9sg\" (UniqueName: \"kubernetes.io/projected/23ec4e7c-3732-4892-897e-5b2a5e7c2577-kube-api-access-gd9sg\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.801693 4766 generic.go:334] "Generic (PLEG): container finished" podID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" containerID="4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" exitCode=137 Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.823237 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1f134cd2-6d22-47cd-9ef6-bfdda2701067","Type":"ContainerStarted","Data":"f2a0af1294bb6f2a78b9d34acd4ababe565c7e2427a063600bdad05c7e0d2dbb"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.825058 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"899280ca-43e9-46f7-8204-a90e682a0656","Type":"ContainerStarted","Data":"7d59b6622e07dc85bd5da35ec81c6d6cd23bfd09f4a7e92d9da60c9b4860bd55"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:20.832678 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5630dc0f-887c-4665-b758-42f6ff12d1dd\") pod \"prometheus-metric-storage-0\" (UID: \"23ec4e7c-3732-4892-897e-5b2a5e7c2577\") " pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.065688 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.113155 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.274648 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357700 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357742 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.357993 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") pod \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\" (UID: \"c0b97605-5664-4ae7-a15d-26b0ae7b4614\") " Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.368517 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc" (OuterVolumeSpecName: "kube-api-access-sgzlc") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "kube-api-access-sgzlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.439293 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.472995 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgzlc\" (UniqueName: \"kubernetes.io/projected/c0b97605-5664-4ae7-a15d-26b0ae7b4614-kube-api-access-sgzlc\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.473016 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.477064 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c0b97605-5664-4ae7-a15d-26b0ae7b4614" (UID: "c0b97605-5664-4ae7-a15d-26b0ae7b4614"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.575359 4766 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b97605-5664-4ae7-a15d-26b0ae7b4614-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.867054 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"1898ea85e5e9452cca0b95051d4d6b4bc3c0f96cfebc8b613d00e6b77376b379"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.883712 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"899280ca-43e9-46f7-8204-a90e682a0656","Type":"ContainerStarted","Data":"22f2e4e745f0e1c079977c162ac07934d21a9115853257f65d22002b82a4068a"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.885097 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.890667 4766 scope.go:117] "RemoveContainer" containerID="4d5a385a379300f1667fee7b30c6a58a29d62b44dc31d6716fcde576f98cfadd" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.890854 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.904004 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1f134cd2-6d22-47cd-9ef6-bfdda2701067","Type":"ContainerStarted","Data":"16010720370fa2d9c8c37d5f967c4342d33eafa51ad9fb338b254ae7e5a68eca"} Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.907915 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.360499651 podStartE2EDuration="3.907893968s" podCreationTimestamp="2026-01-30 18:03:18 +0000 UTC" firstStartedPulling="2026-01-30 18:03:19.977354639 +0000 UTC m=+6054.615311985" lastFinishedPulling="2026-01-30 18:03:20.524748966 +0000 UTC m=+6055.162706302" observedRunningTime="2026-01-30 18:03:21.901290469 +0000 UTC m=+6056.539247825" watchObservedRunningTime="2026-01-30 18:03:21.907893968 +0000 UTC m=+6056.545851314" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.944300 4766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" podUID="1f134cd2-6d22-47cd-9ef6-bfdda2701067" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.947878 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.9478511469999997 podStartE2EDuration="3.947851147s" podCreationTimestamp="2026-01-30 18:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:03:21.935667715 +0000 UTC m=+6056.573625071" watchObservedRunningTime="2026-01-30 18:03:21.947851147 +0000 UTC m=+6056.585808493" Jan 30 18:03:21 crc kubenswrapper[4766]: I0130 18:03:21.984473 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 18:03:22 crc kubenswrapper[4766]: I0130 18:03:22.052195 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c0b97605-5664-4ae7-a15d-26b0ae7b4614" path="/var/lib/kubelet/pods/c0b97605-5664-4ae7-a15d-26b0ae7b4614/volumes" Jan 30 18:03:22 crc kubenswrapper[4766]: I0130 18:03:22.924864 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"acc95510b6dca7728620f801b6294cdc09c765cee3ff5c480b6293df58bcd009"} Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.840448 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.985772 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5"} Jan 30 18:03:28 crc kubenswrapper[4766]: I0130 18:03:28.994619 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb"} Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.631681 4766 scope.go:117] "RemoveContainer" containerID="cbcf29702f59854ea3bf4dbf2361e9f8a36e31bd05f0bda1d36ac83ec37ad3db" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.677925 4766 scope.go:117] "RemoveContainer" containerID="c9458198dfab56b6f64fbd05b1295b35eb049ea1af74a3aa668d258a59d21ba1" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.747240 4766 scope.go:117] "RemoveContainer" containerID="8866b78d897067600b584d9dee594c511c5628be20331b784f3c260d8792a78a" Jan 30 18:03:33 crc kubenswrapper[4766]: I0130 18:03:33.792721 4766 scope.go:117] "RemoveContainer" containerID="5e5b530396781526c9ca9c2a003890cd79c6f57ae8a59f2f830e10a2d58434d2" Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.056397 4766 generic.go:334] "Generic (PLEG): container finished" podID="9044d49e-1762-437b-86a3-8697b46a1930" containerID="631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb" exitCode=0 Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.056513 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerDied","Data":"631f4fbac2c6779d2988780304de813a617653c2010046896cdf02ea90a344eb"} Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.059831 4766 generic.go:334] "Generic (PLEG): container finished" podID="23ec4e7c-3732-4892-897e-5b2a5e7c2577" containerID="78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5" exitCode=0 Jan 30 18:03:35 crc kubenswrapper[4766]: I0130 18:03:35.059888 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerDied","Data":"78a718b6f2340c7b2d5233090bbe962b124e74422c8187153cc73c85bb7f71d5"} Jan 30 18:03:38 crc kubenswrapper[4766]: I0130 18:03:38.086468 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"8ae39afb14dea25b6a784ad28515ded29ebdd679268c89dea22b469e2544719f"} Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.126465 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/alertmanager-metric-storage-0" event={"ID":"9044d49e-1762-437b-86a3-8697b46a1930","Type":"ContainerStarted","Data":"c828660430711f040dc96e08b5dc57a0461147cc0fd0dcc324aa8163e2d939db"} Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.127058 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.130521 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 18:03:41 crc kubenswrapper[4766]: I0130 18:03:41.206611 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.740548211 podStartE2EDuration="22.206581058s" podCreationTimestamp="2026-01-30 18:03:19 +0000 UTC" firstStartedPulling="2026-01-30 18:03:21.1663531 +0000 UTC m=+6055.804310446" lastFinishedPulling="2026-01-30 18:03:37.632385947 +0000 UTC m=+6072.270343293" observedRunningTime="2026-01-30 18:03:41.156688518 +0000 UTC m=+6075.794645874" watchObservedRunningTime="2026-01-30 18:03:41.206581058 +0000 UTC m=+6075.844538404" Jan 30 18:03:42 crc kubenswrapper[4766]: I0130 18:03:42.141402 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"9e1fd5e99ee5a0c07c68746f082d9c7487e6e422dcf2c09c82acf1464be6c561"} Jan 30 18:03:45 crc kubenswrapper[4766]: I0130 18:03:45.170358 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"2b3bdeb423522d96560be35508f8d5157f86aae69539a8d2200cec35cee94304"} Jan 30 18:03:50 crc kubenswrapper[4766]: I0130 18:03:50.214694 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"23ec4e7c-3732-4892-897e-5b2a5e7c2577","Type":"ContainerStarted","Data":"a01305a117c7c00b3a3ee7d158ae40f0680fffda85f22ea45e5c306cd84570c2"} Jan 30 18:03:50 crc kubenswrapper[4766]: I0130 18:03:50.259014 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.089224411 podStartE2EDuration="31.258990875s" podCreationTimestamp="2026-01-30 18:03:19 +0000 UTC" firstStartedPulling="2026-01-30 18:03:22.004970994 +0000 UTC m=+6056.642928340" lastFinishedPulling="2026-01-30 18:03:49.174737458 +0000 UTC m=+6083.812694804" observedRunningTime="2026-01-30 18:03:50.242752993 +0000 UTC m=+6084.880710359" watchObservedRunningTime="2026-01-30 18:03:50.258990875 +0000 UTC m=+6084.896948221" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.066489 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.066881 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.068994 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:51 crc kubenswrapper[4766]: I0130 18:03:51.223748 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.910062 4766 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.913199 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.920659 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.920782 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.923060 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992798 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992868 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992916 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992942 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992962 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.992986 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:52 crc kubenswrapper[4766]: I0130 18:03:52.993026 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095476 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095562 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095618 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095646 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095673 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.095743 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.096452 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.096577 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.102717 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.102902 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.107590 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-config-data\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.108182 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-scripts\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.125224 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z6t\" (UniqueName: \"kubernetes.io/projected/464bbfb2-a15f-4b08-85d1-bc0fe536c6d7-kube-api-access-l4z6t\") pod \"ceilometer-0\" (UID: \"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7\") " pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.294931 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 18:03:53 crc kubenswrapper[4766]: I0130 18:03:53.830503 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 18:03:53 crc kubenswrapper[4766]: W0130 18:03:53.839509 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464bbfb2_a15f_4b08_85d1_bc0fe536c6d7.slice/crio-8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b WatchSource:0}: Error finding container 8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b: Status 404 returned error can't find the container with id 8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b Jan 30 18:03:54 crc kubenswrapper[4766]: I0130 18:03:54.247757 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"8d1404adb469085c84a1918c5fef74149eb84cc043143d7b2c4feb551e5afc6b"} Jan 30 18:03:56 crc kubenswrapper[4766]: I0130 18:03:56.266570 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"7235f146f65a0f3fe18a4b3bc30ab7388c6bb2a3e5cc6f5d1bcd61d01098b740"} Jan 30 18:03:57 crc kubenswrapper[4766]: I0130 18:03:57.277684 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"0da1ca76997f8dd0abcc2238713676579311a21c6015f851d7ead0458d1ab65a"} Jan 30 18:03:58 crc kubenswrapper[4766]: I0130 18:03:58.287330 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"b364bfa9b39865e084a9fb7492117ebd9d5a37c920d882b4cf48f6dc5b4e57ec"} Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.367421 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"464bbfb2-a15f-4b08-85d1-bc0fe536c6d7","Type":"ContainerStarted","Data":"c670836f02038ad7c0a2351f1649a016b515672da0963da6e735e03c6bbe5ef3"} Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.368328 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 18:04:07 crc kubenswrapper[4766]: I0130 18:04:07.401989 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.062006504 podStartE2EDuration="15.401961341s" podCreationTimestamp="2026-01-30 18:03:52 +0000 UTC" firstStartedPulling="2026-01-30 18:03:53.84267405 +0000 UTC m=+6088.480631386" lastFinishedPulling="2026-01-30 18:04:06.182628867 +0000 UTC m=+6100.820586223" observedRunningTime="2026-01-30 18:04:07.392692047 +0000 UTC m=+6102.030649393" watchObservedRunningTime="2026-01-30 18:04:07.401961341 +0000 UTC m=+6102.039918717" Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.052253 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.060833 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.072514 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hkg9q"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.082899 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.091713 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4207-account-create-update-5677m"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.101225 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.109599 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hsbm5"] Jan 30 18:04:12 crc kubenswrapper[4766]: I0130 18:04:12.118739 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dwwb9"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.031316 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.043335 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4379-account-create-update-xxk7g"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.053041 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 18:04:13 crc kubenswrapper[4766]: I0130 18:04:13.065237 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1549-account-create-update-qksfj"] Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.081198 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cd48e2-831c-4067-ae82-6aa11c3ed219" path="/var/lib/kubelet/pods/03cd48e2-831c-4067-ae82-6aa11c3ed219/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.081812 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230985b1-39a5-440c-b67a-97bed8481bd6" path="/var/lib/kubelet/pods/230985b1-39a5-440c-b67a-97bed8481bd6/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.082724 4766 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="a8e9cfc2-7b7d-47eb-aece-ed9fe716594a" path="/var/lib/kubelet/pods/a8e9cfc2-7b7d-47eb-aece-ed9fe716594a/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.083319 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa501cc-1f23-4a0c-b845-31c9ae218be6" path="/var/lib/kubelet/pods/caa501cc-1f23-4a0c-b845-31c9ae218be6/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.084326 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2114339-89f3-4232-94e1-d4323d23978b" path="/var/lib/kubelet/pods/e2114339-89f3-4232-94e1-d4323d23978b/volumes" Jan 30 18:04:14 crc kubenswrapper[4766]: I0130 18:04:14.084864 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda85bd2-cef5-4dba-b322-a9f16aced872" path="/var/lib/kubelet/pods/eda85bd2-cef5-4dba-b322-a9f16aced872/volumes" Jan 30 18:04:23 crc kubenswrapper[4766]: I0130 18:04:23.311092 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 18:04:24 crc kubenswrapper[4766]: I0130 18:04:24.065246 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 18:04:24 crc kubenswrapper[4766]: I0130 18:04:24.121207 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jccb8"] Jan 30 18:04:26 crc kubenswrapper[4766]: I0130 18:04:26.051272 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37a2812-82ad-4535-84e6-569f9b3765a6" path="/var/lib/kubelet/pods/b37a2812-82ad-4535-84e6-569f9b3765a6/volumes" Jan 30 18:04:33 crc kubenswrapper[4766]: I0130 18:04:33.999342 4766 scope.go:117] "RemoveContainer" containerID="3e558c3b2bd50c7543806cf36f97bd5a41e96ea64aaa7d83bb37281ff7150079" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.026256 4766 scope.go:117] "RemoveContainer" containerID="4ceebfac5a0b227e854681a12bc5a1070dab4586e24997f6e4a7f702a9563e66" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.564077 4766 scope.go:117] "RemoveContainer" containerID="b484886b7344df11c7a295d1efb6eeefa526673bc8fccf2d500d87883c528256" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.635918 4766 scope.go:117] "RemoveContainer" containerID="69d76b9aa9a9c3d7d1a5e0b77ed7034745afa17d311bd1f48a0c475c88982f61" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.660825 4766 scope.go:117] "RemoveContainer" containerID="84255a253283b95cc39831e777619bfbcbdd030c283ced85e388fb2e68a58195" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.704510 4766 scope.go:117] "RemoveContainer" containerID="afbdcdecad349aa223b487405699fc3f46bcbef54133e0b074eec4a93f302638" Jan 30 18:04:34 crc kubenswrapper[4766]: I0130 18:04:34.759028 4766 scope.go:117] "RemoveContainer" containerID="c11a5160103bd776a6a5d2558dca488af7e839c269a24583ddad14de582e241f" Jan 30 18:04:37 crc kubenswrapper[4766]: I0130 18:04:37.072032 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 18:04:37 crc kubenswrapper[4766]: I0130 18:04:37.132736 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-247jx"] Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.030641 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.051511 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="202a732a-6c9d-427a-9c87-af7c4af5d184" path="/var/lib/kubelet/pods/202a732a-6c9d-427a-9c87-af7c4af5d184/volumes" Jan 30 18:04:38 crc kubenswrapper[4766]: I0130 18:04:38.052574 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5xsrx"] Jan 30 18:04:39 crc kubenswrapper[4766]: I0130 18:04:39.045198 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:04:39 crc kubenswrapper[4766]: I0130 18:04:39.045524 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:04:40 crc kubenswrapper[4766]: I0130 18:04:40.051901 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083bdb6d-c3f3-412d-9097-48e66c7f28d0" path="/var/lib/kubelet/pods/083bdb6d-c3f3-412d-9097-48e66c7f28d0/volumes" Jan 30 18:04:57 crc kubenswrapper[4766]: I0130 18:04:57.036979 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 18:04:57 crc kubenswrapper[4766]: I0130 18:04:57.049935 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nfnj2"] Jan 30 18:04:58 crc kubenswrapper[4766]: I0130 18:04:58.051379 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018ff185-8917-437b-9c5a-ec143d1fc84a" path="/var/lib/kubelet/pods/018ff185-8917-437b-9c5a-ec143d1fc84a/volumes" Jan 30 18:05:09 crc kubenswrapper[4766]: I0130 18:05:09.045521 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:05:09 crc kubenswrapper[4766]: I0130 18:05:09.046073 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:05:34 crc kubenswrapper[4766]: I0130 18:05:34.922723 4766 scope.go:117] "RemoveContainer" containerID="1027fcfd70b26fa66fbb26590d7374bf1ac4b410943bffac851c340bb52079f0" Jan 30 18:05:34 crc kubenswrapper[4766]: I0130 18:05:34.973302 4766 scope.go:117] "RemoveContainer" containerID="a0f13e7a67d3cb517e1228d6222bbee0f7e7c79bd8b7aaaddf752c4e348579af" Jan 30 18:05:35 crc kubenswrapper[4766]: I0130 18:05:35.018727 4766 scope.go:117] "RemoveContainer" containerID="622b9b57d1c8ffadafcb076f305a5bdc22e042ba182b300a03ff05dbcdcc46b3" Jan 30 18:05:35 crc kubenswrapper[4766]: I0130 18:05:35.047834 4766 scope.go:117] "RemoveContainer" containerID="5aac27e83d1cb5ca2446b49d301ad805fafea78ed00e6ab9d06fdf982c7ca496" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.045663 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.046148 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.046234 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.047446 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.047551 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" gracePeriod=600 Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.065105 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.078380 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b2b1-account-create-update-vjtsm"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.086790 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.094768 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h7zjx"] Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315636 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" exitCode=0 Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315681 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098"} Jan 30 18:05:39 crc kubenswrapper[4766]: I0130 18:05:39.315718 4766 scope.go:117] "RemoveContainer" containerID="e52209522ad6ff86d8d333246ee724873200c61e028d93d4fc612ac4b0977354" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 18:05:40.050488 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3c8440-d3be-418a-a446-f3f592a864bd" path="/var/lib/kubelet/pods/3f3c8440-d3be-418a-a446-f3f592a864bd/volumes" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 18:05:40.051964 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912d4cef-a7f3-40a4-b498-f1da7361a15c" path="/var/lib/kubelet/pods/912d4cef-a7f3-40a4-b498-f1da7361a15c/volumes" Jan 30 18:05:40 crc kubenswrapper[4766]: I0130 
18:05:40.326565 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} Jan 30 18:05:46 crc kubenswrapper[4766]: I0130 18:05:46.038534 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 18:05:46 crc kubenswrapper[4766]: I0130 18:05:46.056652 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7fd4h"] Jan 30 18:05:48 crc kubenswrapper[4766]: I0130 18:05:48.056054 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d92cbfe-71f2-4dc5-981b-0c52c1169a2d" path="/var/lib/kubelet/pods/7d92cbfe-71f2-4dc5-981b-0c52c1169a2d/volumes" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.208444 4766 scope.go:117] "RemoveContainer" containerID="d2335e8782f353fb6442350bea576a44e02bef8eea5ae5d217798cc04d676963" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.252934 4766 scope.go:117] "RemoveContainer" containerID="7890c44e699b67486d1b5e46be24d9577006c39ba9eaa68133e8d00b60940bba" Jan 30 18:06:35 crc kubenswrapper[4766]: I0130 18:06:35.293252 4766 scope.go:117] "RemoveContainer" containerID="9bcd8e7065331188bb35aae678322da7e0860c541ad8d16bf36d90aeac08ac0d" Jan 30 18:07:39 crc kubenswrapper[4766]: I0130 18:07:39.045169 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:07:39 crc kubenswrapper[4766]: I0130 18:07:39.045740 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.350270 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.355538 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.360447 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425599 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425810 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.425880 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527655 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527774 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.527801 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.528592 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.528704 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.548263 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"redhat-operators-7fvqb\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") " pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:01 crc kubenswrapper[4766]: I0130 18:08:01.674435 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.188338 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.721119 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2" exitCode=0 Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.721230 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"} Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.722434 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"adbc220b2deb8c6b2c23f688ed49bbfcc93a05709363d598129b754f45c43c1c"} Jan 30 18:08:02 crc kubenswrapper[4766]: I0130 18:08:02.724836 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:08:05 crc kubenswrapper[4766]: I0130 18:08:05.750071 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} Jan 30 18:08:09 crc kubenswrapper[4766]: I0130 18:08:09.045705 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:08:09 crc kubenswrapper[4766]: I0130 18:08:09.046030 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:13 crc kubenswrapper[4766]: I0130 18:08:13.847746 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f" exitCode=0 Jan 30 18:08:13 crc kubenswrapper[4766]: I0130 18:08:13.847835 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} Jan 30 18:08:14 crc kubenswrapper[4766]: I0130 18:08:14.859951 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" 
event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerStarted","Data":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"} Jan 30 18:08:14 crc kubenswrapper[4766]: I0130 18:08:14.886374 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fvqb" podStartSLOduration=2.308503745 podStartE2EDuration="13.886352383s" podCreationTimestamp="2026-01-30 18:08:01 +0000 UTC" firstStartedPulling="2026-01-30 18:08:02.724629037 +0000 UTC m=+6337.362586383" lastFinishedPulling="2026-01-30 18:08:14.302477675 +0000 UTC m=+6348.940435021" observedRunningTime="2026-01-30 18:08:14.881574513 +0000 UTC m=+6349.519531869" watchObservedRunningTime="2026-01-30 18:08:14.886352383 +0000 UTC m=+6349.524309739" Jan 30 18:08:21 crc kubenswrapper[4766]: I0130 18:08:21.675346 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:21 crc kubenswrapper[4766]: I0130 18:08:21.675931 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:22 crc kubenswrapper[4766]: I0130 18:08:22.724710 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:08:22 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 18:08:22 crc kubenswrapper[4766]: > Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.049119 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.051629 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.063119 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146206 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146282 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.146427 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.248972 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249437 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249477 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249516 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.249726 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.270499 4766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"redhat-marketplace-7m77k\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.380288 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.931823 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:25 crc kubenswrapper[4766]: I0130 18:08:25.952640 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"9f20e475f5a6055842a49abfae865ce70cc486c8717f74afea1dbb07ab14232e"} Jan 30 18:08:26 crc kubenswrapper[4766]: I0130 18:08:26.963558 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623" exitCode=0 Jan 30 18:08:26 crc kubenswrapper[4766]: I0130 18:08:26.963611 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623"} Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.056188 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.056524 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-d22q5"] Jan 30 18:08:28 crc kubenswrapper[4766]: I0130 18:08:28.989118 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c"} Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.000683 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c" exitCode=0 Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.000797 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c"} Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.036323 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.051264 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944d7612-c3af-4bbd-b193-a2769b8d362d" path="/var/lib/kubelet/pods/944d7612-c3af-4bbd-b193-a2769b8d362d/volumes" Jan 30 18:08:30 crc kubenswrapper[4766]: I0130 18:08:30.052139 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-c8b6-account-create-update-vqz78"] Jan 30 18:08:31 crc kubenswrapper[4766]: I0130 18:08:31.013934 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" 
event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerStarted","Data":"4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8"} Jan 30 18:08:31 crc kubenswrapper[4766]: I0130 18:08:31.034696 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m77k" podStartSLOduration=2.506141884 podStartE2EDuration="6.034677363s" podCreationTimestamp="2026-01-30 18:08:25 +0000 UTC" firstStartedPulling="2026-01-30 18:08:26.965635125 +0000 UTC m=+6361.603592481" lastFinishedPulling="2026-01-30 18:08:30.494170614 +0000 UTC m=+6365.132127960" observedRunningTime="2026-01-30 18:08:31.030100447 +0000 UTC m=+6365.668057793" watchObservedRunningTime="2026-01-30 18:08:31.034677363 +0000 UTC m=+6365.672634709" Jan 30 18:08:32 crc kubenswrapper[4766]: I0130 18:08:32.050642 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc91a16-cfbf-425d-bca1-f23f53f60beb" path="/var/lib/kubelet/pods/6fc91a16-cfbf-425d-bca1-f23f53f60beb/volumes" Jan 30 18:08:32 crc kubenswrapper[4766]: I0130 18:08:32.730542 4766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:08:32 crc kubenswrapper[4766]: timeout: failed to connect service ":50051" within 1s Jan 30 18:08:32 crc kubenswrapper[4766]: > Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.110661 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.120028 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-v77vj"] Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.381379 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.381443 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.413564 4766 scope.go:117] "RemoveContainer" containerID="c63229617d55f96821911e32ef6a34d5a26df3748957060c5998ef3872acbfa5" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.438592 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:35 crc kubenswrapper[4766]: I0130 18:08:35.452283 4766 scope.go:117] "RemoveContainer" containerID="82482b6c103da4e33a65a68c2aa8077854641cba347d1131ff453c1ad0a27d26" Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.031379 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.118013 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0550f6c1-ed1f-405f-8420-507890f13d75" path="/var/lib/kubelet/pods/0550f6c1-ed1f-405f-8420-507890f13d75/volumes" Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.120583 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-1019-account-create-update-skkw9"] Jan 30 18:08:36 crc kubenswrapper[4766]: I0130 18:08:36.199780 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:36 crc kubenswrapper[4766]: 
I0130 18:08:36.249527 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:38 crc kubenswrapper[4766]: I0130 18:08:38.051834 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c327fe8-260c-4117-b55e-3612be41da79" path="/var/lib/kubelet/pods/0c327fe8-260c-4117-b55e-3612be41da79/volumes" Jan 30 18:08:38 crc kubenswrapper[4766]: I0130 18:08:38.147009 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m77k" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server" containerID="cri-o://4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" gracePeriod=2 Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046242 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046720 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.046793 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.047852 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:08:39 crc kubenswrapper[4766]: I0130 18:08:39.047965 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" gracePeriod=600 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.166287 4766 generic.go:334] "Generic (PLEG): container finished" podID="452703b6-c53d-4432-8d58-cbdf354b0887" containerID="4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" exitCode=0 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.166430 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8"} Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170169 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" exitCode=0 Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170246 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"} Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.170290 4766 scope.go:117] "RemoveContainer" containerID="14f71626d75ef20c57062d292513e3fd82c4a368099315d09ba80457172d5098" Jan 30 18:08:40 crc kubenswrapper[4766]: E0130 18:08:40.272751 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.503465 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.615902 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616023 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616209 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") pod \"452703b6-c53d-4432-8d58-cbdf354b0887\" (UID: \"452703b6-c53d-4432-8d58-cbdf354b0887\") " Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.616989 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities" (OuterVolumeSpecName: "utilities") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.622377 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg" (OuterVolumeSpecName: "kube-api-access-k8hsg") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "kube-api-access-k8hsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.640489 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "452703b6-c53d-4432-8d58-cbdf354b0887" (UID: "452703b6-c53d-4432-8d58-cbdf354b0887"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.718852 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8hsg\" (UniqueName: \"kubernetes.io/projected/452703b6-c53d-4432-8d58-cbdf354b0887-kube-api-access-k8hsg\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.719159 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:40 crc kubenswrapper[4766]: I0130 18:08:40.719259 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/452703b6-c53d-4432-8d58-cbdf354b0887-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.181823 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:08:41 crc kubenswrapper[4766]: E0130 18:08:41.182129 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.183959 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m77k" event={"ID":"452703b6-c53d-4432-8d58-cbdf354b0887","Type":"ContainerDied","Data":"9f20e475f5a6055842a49abfae865ce70cc486c8717f74afea1dbb07ab14232e"} Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.184074 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m77k" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.184090 4766 scope.go:117] "RemoveContainer" containerID="4c7916779ac8431f2061a8acd0631dac027febf2f4d0ecf027e5eb9495c40fb8" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.219882 4766 scope.go:117] "RemoveContainer" containerID="4bb4bd623a0d7d96f86b5d8208aee11cda4d37f8b9675df7dc40c16263269f2c" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.225267 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.234476 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m77k"] Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.257314 4766 scope.go:117] "RemoveContainer" containerID="0143093a358ae22f4f76b820ec38f6643f0639577c55be627fe18675ef719623" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.728935 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:41 crc kubenswrapper[4766]: I0130 18:08:41.811467 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fvqb" Jan 30 18:08:42 crc kubenswrapper[4766]: I0130 18:08:42.052995 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" path="/var/lib/kubelet/pods/452703b6-c53d-4432-8d58-cbdf354b0887/volumes" Jan 30 18:08:42 crc kubenswrapper[4766]: I0130 18:08:42.742647 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"] Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.214724 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fvqb" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server" containerID="cri-o://7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" gracePeriod=2 Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.695285 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb"
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") "
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782593 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") "
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.782811 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") pod \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\" (UID: \"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a\") "
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.783925 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities" (OuterVolumeSpecName: "utilities") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.788076 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2" (OuterVolumeSpecName: "kube-api-access-dclp2") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "kube-api-access-dclp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.885555 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.885762 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclp2\" (UniqueName: \"kubernetes.io/projected/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-kube-api-access-dclp2\") on node \"crc\" DevicePath \"\""
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.893837 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" (UID: "5661aa7a-4ca4-43a3-8a14-32ba85ecd02a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:08:43 crc kubenswrapper[4766]: I0130 18:08:43.987330 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234675 4766 generic.go:334] "Generic (PLEG): container finished" podID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218" exitCode=0
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234785 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"}
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234837 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fvqb" event={"ID":"5661aa7a-4ca4-43a3-8a14-32ba85ecd02a","Type":"ContainerDied","Data":"adbc220b2deb8c6b2c23f688ed49bbfcc93a05709363d598129b754f45c43c1c"}
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.234871 4766 scope.go:117] "RemoveContainer" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.235215 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fvqb"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.266305 4766 scope.go:117] "RemoveContainer" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.272994 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"]
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.282940 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fvqb"]
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.299357 4766 scope.go:117] "RemoveContainer" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.338432 4766 scope.go:117] "RemoveContainer" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"
Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.338886 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": container with ID starting with 7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218 not found: ID does not exist" containerID="7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.338933 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218"} err="failed to get container status \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": rpc error: code = NotFound desc = could not find container \"7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218\": container with ID starting with 7bc76810479052680c6e9b88c4cc0d915275925573caace53f4dd52943301218 not found: ID does not exist"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.338956 4766 scope.go:117] "RemoveContainer" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"
Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.339359 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": container with ID starting with a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f not found: ID does not exist" containerID="a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339386 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f"} err="failed to get container status \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": rpc error: code = NotFound desc = could not find container \"a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f\": container with ID starting with a86fa1772cbf6c37eba8bc0e141542ddbc7b252719137b45b7c04e5f670abb7f not found: ID does not exist"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339403 4766 scope.go:117] "RemoveContainer" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"
Jan 30 18:08:44 crc kubenswrapper[4766]: E0130 18:08:44.339670 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": container with ID starting with 6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2 not found: ID does not exist" containerID="6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"
Jan 30 18:08:44 crc kubenswrapper[4766]: I0130 18:08:44.339698 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2"} err="failed to get container status \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": rpc error: code = NotFound desc = could not find container \"6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2\": container with ID starting with 6d2d86bf651bcceb29e8c4c66da842c27e540e29df11a4d2ed626b77b27807f2 not found: ID does not exist"
Jan 30 18:08:46 crc kubenswrapper[4766]: I0130 18:08:46.058955 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" path="/var/lib/kubelet/pods/5661aa7a-4ca4-43a3-8a14-32ba85ecd02a/volumes"
Jan 30 18:08:55 crc kubenswrapper[4766]: I0130 18:08:55.039497 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:08:55 crc kubenswrapper[4766]: E0130 18:08:55.040269 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:09:10 crc kubenswrapper[4766]: I0130 18:09:10.041429 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:09:10 crc kubenswrapper[4766]: E0130 18:09:10.042199 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:09:12 crc kubenswrapper[4766]: I0130 18:09:12.075565 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-8nm42"]
Jan 30 18:09:12 crc kubenswrapper[4766]: I0130 18:09:12.075870 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-8nm42"]
Jan 30 18:09:14 crc kubenswrapper[4766]: I0130 18:09:14.051391 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5031f6-51af-4f63-8bc4-4a518f58ddd4" path="/var/lib/kubelet/pods/fd5031f6-51af-4f63-8bc4-4a518f58ddd4/volumes"
Jan 30 18:09:21 crc kubenswrapper[4766]: I0130 18:09:21.039447 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:09:21 crc kubenswrapper[4766]: E0130 18:09:21.040255 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.302557 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303497 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-content"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303519 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-content"
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303547 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303555 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303578 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303588 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303613 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-utilities"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303621 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-utilities"
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303640 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-utilities"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303648 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="extract-utilities"
Jan 30 18:09:22 crc kubenswrapper[4766]: E0130 18:09:22.303670 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-content"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303678 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="extract-content"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303939 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5661aa7a-4ca4-43a3-8a14-32ba85ecd02a" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.303967 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="452703b6-c53d-4432-8d58-cbdf354b0887" containerName="registry-server"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.305910 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.328339 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431035 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431363 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.431533 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.533996 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534102 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534131 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.534894 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.535007 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.558664 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"certified-operators-dfxhv\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") " pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:22 crc kubenswrapper[4766]: I0130 18:09:22.666476 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.175219 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595206 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16" exitCode=0
Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595299 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"}
Jan 30 18:09:23 crc kubenswrapper[4766]: I0130 18:09:23.595551 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"aa2967831de8a967e1d47e44ca024dc8293456b0c2c5eff8b5ff4b43f600fab6"}
Jan 30 18:09:24 crc kubenswrapper[4766]: I0130 18:09:24.607912 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"}
Jan 30 18:09:26 crc kubenswrapper[4766]: I0130 18:09:26.626082 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5" exitCode=0
Jan 30 18:09:26 crc kubenswrapper[4766]: I0130 18:09:26.626147 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"}
Jan 30 18:09:27 crc kubenswrapper[4766]: I0130 18:09:27.637213 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerStarted","Data":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"}
Jan 30 18:09:27 crc kubenswrapper[4766]: I0130 18:09:27.664068 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfxhv" podStartSLOduration=2.198170092 podStartE2EDuration="5.664047841s" podCreationTimestamp="2026-01-30 18:09:22 +0000 UTC" firstStartedPulling="2026-01-30 18:09:23.596999316 +0000 UTC m=+6418.234956662" lastFinishedPulling="2026-01-30 18:09:27.062877075 +0000 UTC m=+6421.700834411" observedRunningTime="2026-01-30 18:09:27.656152756 +0000 UTC m=+6422.294110102" watchObservedRunningTime="2026-01-30 18:09:27.664047841 +0000 UTC m=+6422.302005187"
Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.666912 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.667532 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.712548 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.772864 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:32 crc kubenswrapper[4766]: I0130 18:09:32.958330 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:34 crc kubenswrapper[4766]: I0130 18:09:34.723796 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfxhv" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server" containerID="cri-o://7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" gracePeriod=2
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.433762 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.529990 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") "
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.530490 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") "
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.530552 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") pod \"ef7abb63-975d-41fe-9e07-406bd855526f\" (UID: \"ef7abb63-975d-41fe-9e07-406bd855526f\") "
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.531265 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities" (OuterVolumeSpecName: "utilities") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.539027 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs" (OuterVolumeSpecName: "kube-api-access-tqlbs") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "kube-api-access-tqlbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.551243 4766 scope.go:117] "RemoveContainer" containerID="ac71d8e70f653ebbdd2675504fd0957f83245a57664fca40a163d39e26aa650a"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.634330 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.634392 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlbs\" (UniqueName: \"kubernetes.io/projected/ef7abb63-975d-41fe-9e07-406bd855526f-kube-api-access-tqlbs\") on node \"crc\" DevicePath \"\""
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.664792 4766 scope.go:117] "RemoveContainer" containerID="31c8a3d4fa3c5871f82c77326d881824b1b083a480b009f1be2bb206710bb303"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737243 4766 generic.go:334] "Generic (PLEG): container finished" podID="ef7abb63-975d-41fe-9e07-406bd855526f" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de" exitCode=0
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737316 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"}
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737353 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfxhv" event={"ID":"ef7abb63-975d-41fe-9e07-406bd855526f","Type":"ContainerDied","Data":"aa2967831de8a967e1d47e44ca024dc8293456b0c2c5eff8b5ff4b43f600fab6"}
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737377 4766 scope.go:117] "RemoveContainer" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.737515 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfxhv"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.750834 4766 scope.go:117] "RemoveContainer" containerID="1156fa8967f6790101764cbd5a85756c89530dcced500933e43bdf4774cc947c"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.788626 4766 scope.go:117] "RemoveContainer" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.810585 4766 scope.go:117] "RemoveContainer" containerID="b39ea84d36ef42f8927d7576b9afa12181f150184fa9861bc236ee65bcdde03a"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.832300 4766 scope.go:117] "RemoveContainer" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.863783 4766 scope.go:117] "RemoveContainer" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"
Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.864253 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": container with ID starting with 7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de not found: ID does not exist" containerID="7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864349 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de"} err="failed to get container status \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": rpc error: code = NotFound desc = could not find container \"7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de\": container with ID starting with 7ed2e4a8f77301f74dbe8fcf94da96d12de751e354f6b87901da9ed53cd936de not found: ID does not exist"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864429 4766 scope.go:117] "RemoveContainer" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"
Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.864725 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": container with ID starting with 95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5 not found: ID does not exist" containerID="95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864774 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5"} err="failed to get container status \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": rpc error: code = NotFound desc = could not find container \"95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5\": container with ID starting with 95b62283b88f0b95f6d203bf6faf01256d49e5261ebe9d4c205cf140e428d8f5 not found: ID does not exist"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.864807 4766 scope.go:117] "RemoveContainer" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"
Jan 30 18:09:35 crc kubenswrapper[4766]: E0130 18:09:35.865072 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": container with ID starting with 67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16 not found: ID does not exist" containerID="67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"
Jan 30 18:09:35 crc kubenswrapper[4766]: I0130 18:09:35.865161 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16"} err="failed to get container status \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": rpc error: code = NotFound desc = could not find container \"67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16\": container with ID starting with 67a59d5114cfec40ecce26e63d77ac1f822597430c59c816a6c5488929d95d16 not found: ID does not exist"
Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.046293 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:09:36 crc kubenswrapper[4766]: E0130 18:09:36.046621 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.383349 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef7abb63-975d-41fe-9e07-406bd855526f" (UID: "ef7abb63-975d-41fe-9e07-406bd855526f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.453430 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef7abb63-975d-41fe-9e07-406bd855526f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.671555 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:36 crc kubenswrapper[4766]: I0130 18:09:36.684050 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfxhv"]
Jan 30 18:09:38 crc kubenswrapper[4766]: I0130 18:09:38.053655 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" path="/var/lib/kubelet/pods/ef7abb63-975d-41fe-9e07-406bd855526f/volumes"
Jan 30 18:09:51 crc kubenswrapper[4766]: I0130 18:09:51.039844 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:09:51 crc kubenswrapper[4766]: E0130 18:09:51.040555 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:03 crc kubenswrapper[4766]: I0130 18:10:03.040432 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:10:03 crc kubenswrapper[4766]: E0130 18:10:03.041530 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:14 crc kubenswrapper[4766]: I0130 18:10:14.039003 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:10:14 crc kubenswrapper[4766]: E0130 18:10:14.039684 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:25 crc kubenswrapper[4766]: I0130 18:10:25.039412 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:10:25 crc kubenswrapper[4766]: E0130 18:10:25.040237 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.815985 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"]
Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817014 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817028 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server"
Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817043 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-utilities"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817049 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-utilities"
Jan 30 18:10:29 crc kubenswrapper[4766]: E0130 18:10:29.817064 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-content"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817069 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="extract-content"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.817266 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7abb63-975d-41fe-9e07-406bd855526f" containerName="registry-server"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.818354 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.820950 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzpss"/"kube-root-ca.crt"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.821207 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzpss"/"openshift-service-ca.crt"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.821425 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vzpss"/"default-dockercfg-rd7z6"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.827517 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"]
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.909664 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:29 crc kubenswrapper[4766]: I0130 18:10:29.909752 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.011947 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.012740 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.012895 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.031371 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"must-gather-w799p\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.147286 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p"
Jan 30 18:10:30 crc kubenswrapper[4766]: I0130 18:10:30.687622 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"]
Jan 30 18:10:31 crc kubenswrapper[4766]: I0130 18:10:31.249206 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"52a83fa3d0421a4c02b1382ecdde5f2c954b6d5a37559a41d3ebe5dfe743483d"}
Jan 30 18:10:35 crc kubenswrapper[4766]: I0130 18:10:35.308148 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8"}
Jan 30 18:10:36 crc kubenswrapper[4766]: I0130 18:10:36.317798 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerStarted","Data":"fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401"}
Jan 30 18:10:36 crc kubenswrapper[4766]: I0130 18:10:36.340770 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzpss/must-gather-w799p" podStartSLOduration=3.183107659 podStartE2EDuration="7.340750177s" podCreationTimestamp="2026-01-30 18:10:29 +0000 UTC" firstStartedPulling="2026-01-30 18:10:30.690101571 +0000 UTC m=+6485.328058917" lastFinishedPulling="2026-01-30 18:10:34.847744089 +0000 UTC m=+6489.485701435" observedRunningTime="2026-01-30 18:10:36.340289995 +0000 UTC m=+6490.978247351" watchObservedRunningTime="2026-01-30 18:10:36.340750177 +0000 UTC m=+6490.978707513"
Jan 30 18:10:38 crc kubenswrapper[4766]: I0130 18:10:38.039969 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:10:38 crc kubenswrapper[4766]: E0130 18:10:38.040934 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.288883 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"]
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.291517 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.442982 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.443654 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546299 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546412 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.546514 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.566364 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"crc-debug-4k4tt\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") " pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: I0130 18:10:40.615163 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:10:40 crc kubenswrapper[4766]: W0130 18:10:40.658523 4766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb173c41_5a03_41e9_9607_2af3fadd2bb0.slice/crio-573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb WatchSource:0}: Error finding container 573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb: Status 404 returned error can't find the container with id 573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb
Jan 30 18:10:41 crc kubenswrapper[4766]: I0130 18:10:41.368625 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerStarted","Data":"573a6b91d725472fdfb205c55e448cc38aa6231b4ffbf6de976e5eaeaf9078bb"}
Jan 30 18:10:52 crc kubenswrapper[4766]: I0130 18:10:52.039823 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:10:52 crc kubenswrapper[4766]: E0130 18:10:52.067071 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:10:53 crc kubenswrapper[4766]: I0130 18:10:53.497272 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerStarted","Data":"549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543"}
Jan 30 18:10:53 crc kubenswrapper[4766]: I0130 18:10:53.524080 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" podStartSLOduration=0.959054018 podStartE2EDuration="13.524054538s" podCreationTimestamp="2026-01-30 18:10:40 +0000 UTC" firstStartedPulling="2026-01-30 18:10:40.66211199 +0000 UTC m=+6495.300069346" lastFinishedPulling="2026-01-30 18:10:53.22711252 +0000 UTC m=+6507.865069866" observedRunningTime="2026-01-30 18:10:53.514292372 +0000 UTC m=+6508.152249718" watchObservedRunningTime="2026-01-30 18:10:53.524054538 +0000 UTC m=+6508.162011884"
Jan 30 18:11:07 crc kubenswrapper[4766]: I0130 18:11:07.040390 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:07 crc kubenswrapper[4766]: E0130 18:11:07.041309 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:11:16 crc kubenswrapper[4766]: I0130 18:11:16.701346 4766 generic.go:334] "Generic (PLEG): container finished" podID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerID="549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543" exitCode=0
Jan 30 18:11:16 crc kubenswrapper[4766]: I0130 18:11:16.701415 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-4k4tt" event={"ID":"bb173c41-5a03-41e9-9607-2af3fadd2bb0","Type":"ContainerDied","Data":"549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543"}
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.841113 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.874732 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"]
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.883263 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-4k4tt"]
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.931750 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") pod \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") "
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.931902 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host" (OuterVolumeSpecName: "host") pod "bb173c41-5a03-41e9-9607-2af3fadd2bb0" (UID: "bb173c41-5a03-41e9-9607-2af3fadd2bb0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.932253 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") pod \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\" (UID: \"bb173c41-5a03-41e9-9607-2af3fadd2bb0\") "
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.932934 4766 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb173c41-5a03-41e9-9607-2af3fadd2bb0-host\") on node \"crc\" DevicePath \"\""
Jan 30 18:11:17 crc kubenswrapper[4766]: I0130 18:11:17.939796 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx" (OuterVolumeSpecName: "kube-api-access-swlmx") pod "bb173c41-5a03-41e9-9607-2af3fadd2bb0" (UID: "bb173c41-5a03-41e9-9607-2af3fadd2bb0"). InnerVolumeSpecName "kube-api-access-swlmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.035072 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swlmx\" (UniqueName: \"kubernetes.io/projected/bb173c41-5a03-41e9-9607-2af3fadd2bb0-kube-api-access-swlmx\") on node \"crc\" DevicePath \"\""
Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.051116 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" path="/var/lib/kubelet/pods/bb173c41-5a03-41e9-9607-2af3fadd2bb0/volumes"
Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.720589 4766 scope.go:117] "RemoveContainer" containerID="549166e86899f93f0e412300b625a0698a6ead854b19e39901ccb892da798543"
Jan 30 18:11:18 crc kubenswrapper[4766]: I0130 18:11:18.720643 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-4k4tt"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.039385 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:19 crc kubenswrapper[4766]: E0130 18:11:19.040019 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.067974 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"]
Jan 30 18:11:19 crc kubenswrapper[4766]: E0130 18:11:19.068539 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.068566 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.068803 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb173c41-5a03-41e9-9607-2af3fadd2bb0" containerName="container-00"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.069651 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.153310 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.153871 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255626 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255701 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.255897 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.278960 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"crc-debug-sfqcx\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") " pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.386750 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.736800 4766 generic.go:334] "Generic (PLEG): container finished" podID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerID="4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d" exitCode=1
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.736896 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" event={"ID":"e793baf0-20e5-4275-b2ca-28cc4203be80","Type":"ContainerDied","Data":"4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d"}
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.737453 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/crc-debug-sfqcx" event={"ID":"e793baf0-20e5-4275-b2ca-28cc4203be80","Type":"ContainerStarted","Data":"89dbc8d740b089d80c12d950faa56c545d4bde43689cb20a7ab9dbb853db3b1d"}
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.771997 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"]
Jan 30 18:11:19 crc kubenswrapper[4766]: I0130 18:11:19.780701 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/crc-debug-sfqcx"]
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.870279 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.992965 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") pod \"e793baf0-20e5-4275-b2ca-28cc4203be80\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") "
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.993057 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") pod \"e793baf0-20e5-4275-b2ca-28cc4203be80\" (UID: \"e793baf0-20e5-4275-b2ca-28cc4203be80\") "
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.993437 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host" (OuterVolumeSpecName: "host") pod "e793baf0-20e5-4275-b2ca-28cc4203be80" (UID: "e793baf0-20e5-4275-b2ca-28cc4203be80"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.994164 4766 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e793baf0-20e5-4275-b2ca-28cc4203be80-host\") on node \"crc\" DevicePath \"\""
Jan 30 18:11:20 crc kubenswrapper[4766]: I0130 18:11:20.998444 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf" (OuterVolumeSpecName: "kube-api-access-nckkf") pod "e793baf0-20e5-4275-b2ca-28cc4203be80" (UID: "e793baf0-20e5-4275-b2ca-28cc4203be80"). InnerVolumeSpecName "kube-api-access-nckkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.096269 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nckkf\" (UniqueName: \"kubernetes.io/projected/e793baf0-20e5-4275-b2ca-28cc4203be80-kube-api-access-nckkf\") on node \"crc\" DevicePath \"\""
Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.770340 4766 scope.go:117] "RemoveContainer" containerID="4e748bfeb8a44757dba03123d857374e782525241c642923def478bde4fb254d"
Jan 30 18:11:21 crc kubenswrapper[4766]: I0130 18:11:21.770554 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/crc-debug-sfqcx"
Jan 30 18:11:22 crc kubenswrapper[4766]: I0130 18:11:22.052007 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" path="/var/lib/kubelet/pods/e793baf0-20e5-4275-b2ca-28cc4203be80/volumes"
Jan 30 18:11:30 crc kubenswrapper[4766]: I0130 18:11:30.039525 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:30 crc kubenswrapper[4766]: E0130 18:11:30.040300 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:11:41 crc kubenswrapper[4766]: I0130 18:11:41.039655 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:41 crc kubenswrapper[4766]: E0130 18:11:41.041515 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.047765 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"]
Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.060701 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3460-account-create-update-759zj"]
Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.069734 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:11:49 crc kubenswrapper[4766]: I0130 18:11:49.076993 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-qr4v8"]
Jan 30 18:11:50 crc kubenswrapper[4766]: I0130 18:11:50.052180 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac9189d-ff73-4cd5-8299-276858527c74" path="/var/lib/kubelet/pods/8ac9189d-ff73-4cd5-8299-276858527c74/volumes"
Jan 30 18:11:50 crc kubenswrapper[4766]: I0130 18:11:50.053650 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43513bc-2d21-47b3-8acb-b331c5f5f46f" path="/var/lib/kubelet/pods/f43513bc-2d21-47b3-8acb-b331c5f5f46f/volumes"
Jan 30 18:11:56 crc kubenswrapper[4766]: I0130 18:11:56.045515 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a"
Jan 30 18:11:56 crc kubenswrapper[4766]: E0130 18:11:56.046417 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.274328 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/init-config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.522693 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/init-config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.524311 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/alertmanager/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.602963 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_9044d49e-1762-437b-86a3-8697b46a1930/config-reloader/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.719052 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b7b4f6b66-crqxp_c0607eb3-be12-4282-ac48-55b5220b4888/barbican-api/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.751643 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b7b4f6b66-crqxp_c0607eb3-be12-4282-ac48-55b5220b4888/barbican-api-log/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.897314 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78445c974-66754_a6132938-2052-4889-b1d7-2e43deb664e1/barbican-keystone-listener/0.log"
Jan 30 18:12:00 crc kubenswrapper[4766]: I0130 18:12:00.919274 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78445c974-66754_a6132938-2052-4889-b1d7-2e43deb664e1/barbican-keystone-listener-log/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.023892 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84dcf975b7-fj984_eb8f2fee-863e-4c1e-90af-6ed7a631a4ac/barbican-worker/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.097519 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84dcf975b7-fj984_eb8f2fee-863e-4c1e-90af-6ed7a631a4ac/barbican-worker-log/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.215399 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/ceilometer-central-agent/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.279513 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/ceilometer-notification-agent/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.313975 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/proxy-httpd/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.366926 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464bbfb2-a15f-4b08-85d1-bc0fe536c6d7/sg-core/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.504652 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9a81891-2796-4952-bf9e-9a9f83668e34/cinder-api-log/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.528031 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9a81891-2796-4952-bf9e-9a9f83668e34/cinder-api/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.726450 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1a4ab9dd-be94-4701-a0ba-55dde27e9543/probe/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.804706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1a4ab9dd-be94-4701-a0ba-55dde27e9543/cinder-backup/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.865986 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_598edf34-3970-416e-b9fb-4de69de61ca1/cinder-scheduler/0.log"
Jan 30 18:12:01 crc kubenswrapper[4766]: I0130 18:12:01.945535 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_598edf34-3970-416e-b9fb-4de69de61ca1/probe/0.log"
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.027502 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf1121a2-7545-40c9-9280-9337e94554d9/cinder-volume/0.log"
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.037278 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-276pq"]
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.064822 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-276pq"]
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.093835 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_cf1121a2-7545-40c9-9280-9337e94554d9/probe/0.log"
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.245893 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/init/0.log"
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.390894 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/init/0.log"
Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.422065 4766 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_dnsmasq-dns-8687c8cf7-7zxrr_c2333655-ed62-419c-a0cc-04a4c9f36938/dnsmasq-dns/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.425615 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ddc1af26-668d-4715-b17a-e94ee4f5b571/glance-httpd/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.603963 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ddc1af26-668d-4715-b17a-e94ee4f5b571/glance-log/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.629740 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c25f82b3-9296-4814-92b1-59ca5c2bf2a0/glance-httpd/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.699861 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c25f82b3-9296-4814-92b1-59ca5c2bf2a0/glance-log/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.865337 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-675bcfc5ff-kvdtq_e11fd011-1725-4cdd-979f-75eecd0329b2/heat-api/0.log" Jan 30 18:12:02 crc kubenswrapper[4766]: I0130 18:12:02.930335 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-675bf5dcf-ltj5r_65f44ca0-52f4-4d4a-aeb8-18275fff50eb/heat-cfnapi/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.122816 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-54c46d7b9c-z94n2_364a6690-a249-4765-b86e-b72ca919edb8/heat-engine/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.403031 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-757f4f657-jzgr8_f7b06d45-03c9-406f-8fc0-79428ec9de8f/horizon/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.500067 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-757f4f657-jzgr8_f7b06d45-03c9-406f-8fc0-79428ec9de8f/horizon-log/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.547662 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496601-pl6qc_5d20810a-2efe-43c6-a8e6-92a14834a048/keystone-cron/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.776686 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_899280ca-43e9-46f7-8204-a90e682a0656/kube-state-metrics/0.log" Jan 30 18:12:03 crc kubenswrapper[4766]: I0130 18:12:03.803847 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d9bc78c74-tqx5h_d2175d86-a673-4c75-9344-d410bff4770a/keystone-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.001380 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_d76c2935-d3e2-401f-bdd0-878e885a5add/adoption/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.051619 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bc6794-04be-40f4-8fa7-552f45a104c0" path="/var/lib/kubelet/pods/05bc6794-04be-40f4-8fa7-552f45a104c0/volumes" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.324480 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-577cfcb8f7-k7t7l_f8fd7445-369a-43d1-8b68-6a3d7b2abbe3/neutron-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.375136 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-577cfcb8f7-k7t7l_f8fd7445-369a-43d1-8b68-6a3d7b2abbe3/neutron-httpd/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.576208 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af618003-f485-4daa-bedb-d1408b4547bb/nova-api-api/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.734393 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_af618003-f485-4daa-bedb-d1408b4547bb/nova-api-log/0.log" Jan 30 18:12:04 crc kubenswrapper[4766]: I0130 18:12:04.860213 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_463fa20b-ef02-4b0a-ae8e-3fed6dc02c37/nova-cell0-conductor-conductor/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.022292 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b4061a48-dd7c-4b2f-aa8b-422eb8f65c1e/nova-cell1-conductor-conductor/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.236499 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5d4aa9c5-4f42-495a-921f-986b170dafe4/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.291336 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_374fa21e-428d-4383-9124-5272df0552d4/nova-metadata-metadata/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.311119 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_374fa21e-428d-4383-9124-5272df0552d4/nova-metadata-log/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.526799 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_782b2122-c6f0-424d-85b1-efb911f37e20/nova-scheduler-scheduler/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.582372 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/init/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.825440 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/octavia-api-provider-agent/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.853086 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/init/0.log" Jan 30 18:12:05 crc kubenswrapper[4766]: I0130 18:12:05.992750 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5c95b64c75-5mhgs_0eb984d4-df63-4a4e-b808-e30c97f6f606/octavia-api/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.047706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.234368 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/octavia-healthmanager/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.293538 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-422fs_1c79d934-7880-4883-bee6-c60ea7745616/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.400658 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.621907 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.663577 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.721691 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f25c5_7ab0aca6-c8b4-4f8b-a74e-1fc6b7aa433a/octavia-housekeeping/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.854310 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/octavia-amphora-httpd/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.945004 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-b9qv6_a2dd03c7-c095-4563-9107-802624d1e4f5/init/0.log" Jan 30 18:12:06 crc kubenswrapper[4766]: I0130 18:12:06.992702 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.355412 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.415953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-l7mdv_37d87bf7-0bd7-4201-b0e3-0d1b8062c930/octavia-rsyslog/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.483146 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.699335 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/init/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.801057 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/mysql-bootstrap/0.log" Jan 30 18:12:07 crc kubenswrapper[4766]: I0130 18:12:07.842780 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-qrfbg_5aade569-1bea-4133-8ea3-51cea870143d/octavia-worker/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.015548 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.025520 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7c586850-0ed6-4949-9087-0e66405455ce/galera/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.144556 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.321676 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/mysql-bootstrap/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.429997 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e546c4-803f-4379-b5fb-de5ec7f0c79f/galera/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.441452 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1f134cd2-6d22-47cd-9ef6-bfdda2701067/openstackclient/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.666710 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-k9frg_8d8369af-eac5-4d31-b183-1a542da452c5/ovn-controller/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.770954 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8hgh6_1a4fbcc6-ea61-45d4-b3c4-ecaf44f460c5/openstack-network-exporter/0.log" Jan 30 18:12:08 crc kubenswrapper[4766]: I0130 18:12:08.989334 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server-init/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.217907 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server-init/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.226936 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovsdb-server/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.228663 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4vlg_aa514cb2-1f05-42a6-a181-f4f62250bd7c/ovs-vswitchd/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.463125 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_7fb6354d-977f-494f-9a51-0a1b8f48c686/adoption/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.514161 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9743ed16-7558-435e-9f72-3688bd1102d7/ovn-northd/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.539534 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9743ed16-7558-435e-9f72-3688bd1102d7/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.766553 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b16a682c-8a11-4113-82e8-b361a1d8881e/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.781601 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b16a682c-8a11-4113-82e8-b361a1d8881e/ovsdbserver-nb/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.955844 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_b29c551b-31dd-4264-b3f0-04fde1a2529f/openstack-network-exporter/0.log" Jan 30 18:12:09 crc kubenswrapper[4766]: I0130 18:12:09.994924 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_b29c551b-31dd-4264-b3f0-04fde1a2529f/ovsdbserver-nb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.057764 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_2591e329-01bd-4573-8590-6e3f62bfb187/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.170695 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_2591e329-01bd-4573-8590-6e3f62bfb187/ovsdbserver-nb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.245477 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1053f18b-60a9-44c8-84f5-77bc506a83c1/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.340369 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1053f18b-60a9-44c8-84f5-77bc506a83c1/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.413750 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_08baa9d0-2942-4a73-a75a-d13dc2148bb0/memcached/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.496743 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_95b4e121-951b-4c45-a227-1ec8638a2320/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.550961 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_95b4e121-951b-4c45-a227-1ec8638a2320/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.579230 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_76df5ae8-0eeb-4bb5-86ee-1c416397a186/openstack-network-exporter/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.687410 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_76df5ae8-0eeb-4bb5-86ee-1c416397a186/ovsdbserver-sb/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.760683 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cf79c7456-bp9jt_234231ef-1ed0-40ff-a4a8-0d9f533d39de/placement-api/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.807223 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6cf79c7456-bp9jt_234231ef-1ed0-40ff-a4a8-0d9f533d39de/placement-log/0.log" Jan 30 18:12:10 crc kubenswrapper[4766]: I0130 18:12:10.903509 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/init-config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.040492 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:11 crc kubenswrapper[4766]: E0130 18:12:11.041228 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.060490 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/prometheus/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.068665 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/init-config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.092570 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/config-reloader/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.106241 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_23ec4e7c-3732-4892-897e-5b2a5e7c2577/thanos-sidecar/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.229587 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.423384 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.467008 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd0348d-2f44-4961-9503-eb8ce09016d8/rabbitmq/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.508985 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.659694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/setup-container/0.log" Jan 30 18:12:11 crc kubenswrapper[4766]: I0130 18:12:11.669614 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b579b360-d367-4637-8bf4-24be247f4daf/rabbitmq/0.log" Jan 30 18:12:22 crc kubenswrapper[4766]: I0130 18:12:22.039250 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:22 crc kubenswrapper[4766]: E0130 18:12:22.040015 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.531110 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-ssl7s_46a7c725-b480-4f85-91d0-24831e713b26/manager/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.609853 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.757120 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.840461 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.857547 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:31 crc kubenswrapper[4766]: I0130 18:12:31.987861 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/util/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.006484 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/extract/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.016694 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ca63afb9816c27b78326d2df487586527ea2053f3a00905fba8657476074mkv_cbc79777-d574-4d18-953a-6d51b5c2bd84/pull/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.234410 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-rjgtk_c610cc53-6813-4c5b-86e9-b421aaa21666/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.273922 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-mlkcx_72b84e1c-8ed8-4fae-8dff-ca2576579904/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.503142 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-8hrwp_2a5fe995-2904-4751-ae74-958efaa8596a/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.555516 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6bfc9d4d48-7287m_d34f90ce-9c03-441f-85cb-67b1666672fc/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.675684 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-lhxhc_be908bdc-d0b5-4409-b088-b9b51de3cfb0/manager/0.log" Jan 30 18:12:32 crc kubenswrapper[4766]: I0130 18:12:32.886827 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-jhbv7_16fd0d31-da4c-4c6b-bbc4-8302daee3ee5/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.119706 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-xkfn6_b0db2f42-5872-4cac-9ee0-5990c49e0a26/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.222110 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-l4pbc_0974b654-1fc0-4d97-9be3-eca153de4c57/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.281541 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ddthn_09fcb126-016c-4b79-91d5-90e98e3da7f3/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.447140 4766 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-jzztd_1ea9d2ea-ca11-428c-ab61-28bf391bcd4f/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.558882 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-kkvlj_d4c39f8d-f83d-4311-bb99-24dfa7eaeafd/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.839635 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-6jc7f_0582a100-4b50-452f-baca-e67b4d6f2891/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.844559 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-swq4p_a5c180e1-f6bc-49b7-b4cf-4812d0bba5ac/manager/0.log" Jan 30 18:12:33 crc kubenswrapper[4766]: I0130 18:12:33.924924 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlmw6r_90a2893c-9d38-4d53-93d9-a50421172933/manager/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.183231 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c7c85d9bc-85t58_e1df6663-4a1f-4900-8eba-215a6f08beb0/operator/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.394787 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dpb9n_502b8426-9711-4e00-b59f-743352003f2b/registry-server/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.720221 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-2jmqd_8ce68dfd-dfb4-42b9-8fc1-0da8b8336b90/manager/0.log" Jan 30 18:12:34 crc kubenswrapper[4766]: I0130 18:12:34.760527 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-bm24k_04cf0394-fb7b-41a9-a9bb-6fec8537d393/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.002601 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-49xwp_dc1c52ba-db5b-40ac-87da-de36346e8491/operator/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.039160 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:35 crc kubenswrapper[4766]: E0130 18:12:35.039435 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.472560 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-566d8d7445-l44w4_5eacef6b-7362-4c43-912a-eb3e6ccce6e9/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.670773 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-69484b8d9d-tqxks_0c603c94-f0b0-4820-a5a1-0ab9a76ceb49/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.671953 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-d7xxm_c03d46f4-f454-4b31-b4c7-5c324390d8ec/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.834783 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86bf68df65-m95g8_b0a6f6d6-6e33-4f4c-a0e4-cff7d180eb6f/manager/0.log" Jan 30 18:12:35 crc kubenswrapper[4766]: I0130 18:12:35.858399 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-dklb4_55fb4fd9-f80b-474b-b9c9-758720536349/manager/0.log" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.021685 4766 scope.go:117] "RemoveContainer" containerID="2372c1e9832f7c23aa19961a5061d572b88f3ebb7135f0f0dc1ca6e4cc7f3513" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.048398 4766 scope.go:117] "RemoveContainer" containerID="fdc597711293e561af5e386d2cc4ab829c74c387f45fbdb64b6eb6843ce500c5" Jan 30 18:12:36 crc kubenswrapper[4766]: I0130 18:12:36.098578 4766 scope.go:117] "RemoveContainer" containerID="2284a65079c4717b672db4a45e6787bcf5bd83c7d786d4d7da7725c5a83bc169" Jan 30 18:12:46 crc kubenswrapper[4766]: I0130 18:12:46.052260 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:46 crc kubenswrapper[4766]: E0130 18:12:46.053869 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.043506 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-28vp9_bb325f25-00bb-4519-99d5-94ea7bbcd9d5/control-plane-machine-set-operator/0.log" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.271676 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jn8dp_8acca84e-2800-4a20-b3e8-84e021d1c001/kube-rbac-proxy/0.log" Jan 30 18:12:54 crc kubenswrapper[4766]: I0130 18:12:54.326130 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jn8dp_8acca84e-2800-4a20-b3e8-84e021d1c001/machine-api-operator/0.log" Jan 30 18:12:57 crc kubenswrapper[4766]: I0130 18:12:57.040371 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:12:57 crc kubenswrapper[4766]: E0130 18:12:57.040973 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.664755 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-9lmrd_d635eb48-c2c9-404e-9ffb-c8385134670b/cert-manager-controller/0.log" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.835524 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-ltbxj_92fa5747-17c3-4b1c-a66a-e8b0a1d6f622/cert-manager-webhook/0.log" Jan 30 18:13:05 crc kubenswrapper[4766]: I0130 18:13:05.905352 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qr6lx_b1682925-c14f-425a-b072-535a37cdca48/cert-manager-cainjector/0.log" Jan 30 18:13:08 crc kubenswrapper[4766]: I0130 18:13:08.040353 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:08 crc kubenswrapper[4766]: E0130 18:13:08.041032 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.289251 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:16 crc kubenswrapper[4766]: E0130 18:13:16.295487 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.295528 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.295754 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e793baf0-20e5-4275-b2ca-28cc4203be80" containerName="container-00" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.297470 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.307099 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515072 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515261 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.515545 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617498 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617623 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617667 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.617985 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.618064 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.643017 4766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"community-operators-hbhzj\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:16 crc kubenswrapper[4766]: I0130 18:13:16.741680 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.345220 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.744458 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-d2p2z_d30ca6b4-bd87-4d25-92dd-f3d94410f2a3/nmstate-console-plugin/0.log" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.963211 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-82wxr_121c0166-75c7-4f39-a07b-c89cb81d2fd8/nmstate-handler/0.log" Jan 30 18:13:17 crc kubenswrapper[4766]: I0130 18:13:17.989283 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wv52c_46ac0f62-2413-4258-a957-35039942d0f7/kube-rbac-proxy/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004557 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" exitCode=0 Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004603 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34"} Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.004646 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"d556a6d4ba3b27fc2742e8c095741dd6d4af9660f74c660ee0fa4ba9a2509a03"} Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.007124 4766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.045072 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wv52c_46ac0f62-2413-4258-a957-35039942d0f7/nmstate-metrics/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.176489 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v6mpm_463d1450-7318-4003-b30d-82dc9e1bec53/nmstate-operator/0.log" Jan 30 18:13:18 crc kubenswrapper[4766]: I0130 18:13:18.226509 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-zj7fb_ed7e34e5-c04e-4852-b4a3-9e28fd5f960d/nmstate-webhook/0.log" Jan 30 18:13:19 crc kubenswrapper[4766]: I0130 18:13:19.040092 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:19 crc kubenswrapper[4766]: E0130 18:13:19.040473 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:20 crc kubenswrapper[4766]: I0130 18:13:20.025329 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} Jan 30 18:13:22 crc kubenswrapper[4766]: I0130 18:13:22.048656 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" exitCode=0 Jan 30 18:13:22 crc kubenswrapper[4766]: I0130 18:13:22.050942 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} Jan 30 18:13:23 crc kubenswrapper[4766]: I0130 18:13:23.061221 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerStarted","Data":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} Jan 30 18:13:23 crc kubenswrapper[4766]: I0130 18:13:23.083539 4766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbhzj" podStartSLOduration=2.537404093 podStartE2EDuration="7.083516378s" podCreationTimestamp="2026-01-30 18:13:16 +0000 UTC" firstStartedPulling="2026-01-30 18:13:18.006814839 +0000 UTC m=+6652.644772185" lastFinishedPulling="2026-01-30 18:13:22.552927124 +0000 UTC m=+6657.190884470" observedRunningTime="2026-01-30 18:13:23.07810329 +0000 UTC m=+6657.716060646" watchObservedRunningTime="2026-01-30 18:13:23.083516378 +0000 UTC m=+6657.721473744" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.742210 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.743950 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:26 crc kubenswrapper[4766]: I0130 18:13:26.807919 4766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:27 crc kubenswrapper[4766]: I0130 18:13:27.149079 4766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.252107 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.253264 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbhzj" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" containerID="cri-o://52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" gracePeriod=2 Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.708269 4766 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.823918 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.823999 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.824216 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") pod \"898542cd-ea0d-42c2-9988-ea4a384d8851\" (UID: \"898542cd-ea0d-42c2-9988-ea4a384d8851\") " Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.825034 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities" (OuterVolumeSpecName: "utilities") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.831992 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz" (OuterVolumeSpecName: "kube-api-access-z97sz") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "kube-api-access-z97sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.888351 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "898542cd-ea0d-42c2-9988-ea4a384d8851" (UID: "898542cd-ea0d-42c2-9988-ea4a384d8851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927153 4766 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927218 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97sz\" (UniqueName: \"kubernetes.io/projected/898542cd-ea0d-42c2-9988-ea4a384d8851-kube-api-access-z97sz\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:30 crc kubenswrapper[4766]: I0130 18:13:30.927232 4766 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/898542cd-ea0d-42c2-9988-ea4a384d8851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.025711 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-npbz4_ed5054c0-0009-40bb-8b4c-6e1a4da07b41/prometheus-operator/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128248 4766 generic.go:334] "Generic (PLEG): container finished" podID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" exitCode=0 Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128297 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128332 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbhzj" event={"ID":"898542cd-ea0d-42c2-9988-ea4a384d8851","Type":"ContainerDied","Data":"d556a6d4ba3b27fc2742e8c095741dd6d4af9660f74c660ee0fa4ba9a2509a03"} Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128333 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbhzj" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.128352 4766 scope.go:117] "RemoveContainer" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.148597 4766 scope.go:117] "RemoveContainer" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.162652 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.172000 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbhzj"] Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.189494 4766 scope.go:117] "RemoveContainer" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.224207 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-qm4dx_86dd422f-41b2-438f-9a62-e558efc71c90/prometheus-operator-admission-webhook/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.232590 4766 scope.go:117] "RemoveContainer" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233021 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": container with ID starting with 52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205 not found: ID does not exist" containerID="52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233070 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205"} err="failed to get container status \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": rpc error: code = NotFound desc = could not find container \"52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205\": container with ID starting with 52da343bdf4929f3fbe2d6f09dba0bcd967d568030291d355d7ed4ba26398205 not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233098 4766 scope.go:117] "RemoveContainer" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233463 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": container with ID starting with 89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f not found: ID does not exist" containerID="89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233512 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f"} err="failed to get container status \"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": rpc error: code = NotFound desc = could not find container 
\"89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f\": container with ID starting with 89f99b5b5394067c00f4fb459376eb984dac95046ef36418ec7099a9db2cde0f not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233539 4766 scope.go:117] "RemoveContainer" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: E0130 18:13:31.233969 4766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": container with ID starting with 47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34 not found: ID does not exist" containerID="47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.233993 4766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34"} err="failed to get container status \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": rpc error: code = NotFound desc = could not find container \"47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34\": container with ID starting with 47b461760e05b7fdc2d23bd256c7ed70493f479d1dff51eeb7622cd1c26a6e34 not found: ID does not exist" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.280122 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-v5dzf_4e9a3cc5-7614-4db3-8c5b-590bff436549/prometheus-operator-admission-webhook/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.417789 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbt8s_ccbd3ff2-7dc6-488c-ae64-d0710464e20d/operator/0.log" Jan 30 18:13:31 crc kubenswrapper[4766]: I0130 18:13:31.460157 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-bgqzt_9f9dfe10-4d1d-4081-b3f3-4e7e4be37815/perses-operator/0.log" Jan 30 18:13:32 crc kubenswrapper[4766]: I0130 18:13:32.039202 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:32 crc kubenswrapper[4766]: E0130 18:13:32.039698 4766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ddhn5_openshift-machine-config-operator(0a25c516-3d8c-4fdb-9425-692ce650f427)\"" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" Jan 30 18:13:32 crc kubenswrapper[4766]: I0130 18:13:32.051189 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" path="/var/lib/kubelet/pods/898542cd-ea0d-42c2-9988-ea4a384d8851/volumes" Jan 30 18:13:43 crc kubenswrapper[4766]: I0130 18:13:43.040032 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.237760 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" 
event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.298346 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7v5hl_f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873/kube-rbac-proxy/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.580719 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.695945 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7v5hl_f4f6fbd7-b3c4-4f9f-8689-6ef8bfffc873/controller/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.847359 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.856493 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.867083 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:44 crc kubenswrapper[4766]: I0130 18:13:44.930541 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.107616 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.107709 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.142387 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.163265 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.326590 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.327975 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.329968 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/cp-frr-files/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.333645 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/controller/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.490654 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/frr-metrics/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.542440 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/kube-rbac-proxy/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.584865 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/kube-rbac-proxy-frr/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.708228 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/reloader/0.log" Jan 30 18:13:45 crc kubenswrapper[4766]: I0130 18:13:45.845066 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-z9cbg_85bd5ff3-9577-4598-92a9-f24f00c56187/frr-k8s-webhook-server/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.117732 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d87dd9885-cpjtx_8f4ddea0-a380-401d-849f-6968d6d80e8b/manager/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.200396 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-698996dc4d-5ps7v_5aa43b8e-3f06-441e-ade0-264da132ec73/webhook-server/0.log" Jan 30 18:13:46 crc kubenswrapper[4766]: I0130 18:13:46.315903 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfspk_4ad0227f-0410-4f5e-bfc5-7dd96164c9b5/kube-rbac-proxy/0.log" Jan 30 18:13:47 crc kubenswrapper[4766]: I0130 18:13:47.190351 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfspk_4ad0227f-0410-4f5e-bfc5-7dd96164c9b5/speaker/0.log" Jan 30 18:13:48 crc kubenswrapper[4766]: I0130 18:13:48.055059 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fr242_4a563046-adc2-4e82-9b89-a549d3f06250/frr/0.log" Jan 30 18:13:57 crc kubenswrapper[4766]: I0130 18:13:57.622392 4766 patch_prober.go:28] interesting pod/oauth-openshift-6fffd54687-fl5rm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:13:57 crc kubenswrapper[4766]: I0130 18:13:57.623228 4766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6fffd54687-fl5rm" podUID="dfb08685-43c0-4cd6-bb82-51f5df825923" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:13:59 crc kubenswrapper[4766]: I0130 18:13:59.915768 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.153880 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.192495 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.237320 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.357407 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.377612 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.420839 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdvrhz_246ff80e-3711-4ffe-8fdb-0942844aef18/extract/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.611427 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.802357 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.810799 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.810930 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:00 crc kubenswrapper[4766]: I0130 18:14:00.996616 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/extract/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.012914 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.051041 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713ccdnn_7cde9372-207a-40f0-829b-1e0b5c662ec1/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.173313 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.336871 4766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.386556 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.388097 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.587247 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/util/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.617261 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/pull/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.642651 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e58gfcs_a2619907-b01e-44ad-99e7-a1ae313da017/extract/0.log" Jan 30 18:14:01 crc kubenswrapper[4766]: I0130 18:14:01.828660 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.016177 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.020280 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.042113 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.220662 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/extract/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.245073 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/pull/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.265016 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08jhlnz_1d8b8ccc-a37c-45d4-97e9-a3eb1bf7f951/util/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.401000 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 
18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.609524 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.612720 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.625620 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:02 crc kubenswrapper[4766]: I0130 18:14:02.991303 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.015613 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.175465 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.429795 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.449093 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.466880 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.581160 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xbqw6_8d7c1afe-4961-4d01-9513-635a558d6eba/registry-server/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.665905 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-utilities/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.683534 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/extract-content/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.899634 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rwhkx_2b001665-9e64-4f29-b35f-5f702206ae07/marketplace-operator/0.log" Jan 30 18:14:03 crc kubenswrapper[4766]: I0130 18:14:03.937521 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.215338 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 
18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.241940 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.248155 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.432437 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.501909 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.680850 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.883485 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8wb8_5bf71edb-8510-412d-95bd-028b90482ad1/registry-server/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.929536 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.939012 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:04 crc kubenswrapper[4766]: I0130 18:14:04.958164 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s94z_45931cc3-9fdc-43a0-bc52-7ac389c4f75b/registry-server/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.005254 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.186245 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-utilities/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.193069 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/extract-content/0.log" Jan 30 18:14:05 crc kubenswrapper[4766]: I0130 18:14:05.435273 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hxmkb_d84c1be7-4d75-42f5-a45d-cd83378aadca/registry-server/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.597968 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-npbz4_ed5054c0-0009-40bb-8b4c-6e1a4da07b41/prometheus-operator/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.714462 4766 log.go:25] "Finished parsing log file" 
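Every entry in this stretch shares the same shape: a journald prefix (Jan 30 18:14:04 crc kubenswrapper[4766]:) followed by a klog header (severity letter, MMDD date, wall time with microseconds, PID, source file:line) and a structured message. A minimal parsing sketch for pulling those fields out of such a log, assuming one entry per line on stdin:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // klogHeader matches e.g. `I0130 18:14:02.401000 4766 log.go:25] msg`:
    // severity (I/W/E), MMDD date, time, PID, file:line, then the message.
    var klogHeader = regexp.MustCompile(
        `([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^\]]+)\] (.*)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some entries are very long
        for sc.Scan() {
            m := klogHeader.FindStringSubmatch(sc.Text())
            if m == nil {
                continue // non-klog journal noise or a continuation fragment
            }
            fmt.Printf("sev=%s time=%s src=%s msg=%.60s\n", m[1], m[3], m[5], m[6])
        }
    }
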
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-qm4dx_86dd422f-41b2-438f-9a62-e558efc71c90/prometheus-operator-admission-webhook/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.724006 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-946744c6d-v5dzf_4e9a3cc5-7614-4db3-8c5b-590bff436549/prometheus-operator-admission-webhook/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.836917 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbt8s_ccbd3ff2-7dc6-488c-ae64-d0710464e20d/operator/0.log" Jan 30 18:14:17 crc kubenswrapper[4766]: I0130 18:14:17.936692 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-bgqzt_9f9dfe10-4d1d-4081-b3f3-4e7e4be37815/perses-operator/0.log" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.168443 4766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169438 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169458 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169505 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169513 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: E0130 18:15:00.169534 4766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169542 4766 state_mem.go:107] "Deleted CPUSet assignment" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.169723 4766 memory_manager.go:354] "RemoveStaleState removing state" podUID="898542cd-ea0d-42c2-9988-ea4a384d8851" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.170502 4766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.172323 4766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.175675 4766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.185913 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323505 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323610 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.323680 4766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425750 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425828 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.425872 4766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.426754 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod 
\"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.431642 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.455659 4766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"collect-profiles-29496615-8pt8n\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.522720 4766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:00 crc kubenswrapper[4766]: I0130 18:15:00.994163 4766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n"] Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.948685 4766 generic.go:334] "Generic (PLEG): container finished" podID="f974fd4d-c161-41f5-b6c4-1466867ec240" containerID="d1176e19e2835c2856e4fcfcfc22dd8a9e0ab5466990c91977282d854b6f777e" exitCode=0 Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.948789 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerDied","Data":"d1176e19e2835c2856e4fcfcfc22dd8a9e0ab5466990c91977282d854b6f777e"} Jan 30 18:15:01 crc kubenswrapper[4766]: I0130 18:15:01.949210 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerStarted","Data":"6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16"} Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.351110 4766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499113 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499283 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.499353 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") pod \"f974fd4d-c161-41f5-b6c4-1466867ec240\" (UID: \"f974fd4d-c161-41f5-b6c4-1466867ec240\") " Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.501694 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume" (OuterVolumeSpecName: "config-volume") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.520354 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs" (OuterVolumeSpecName: "kube-api-access-z9vbs") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "kube-api-access-z9vbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.520473 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f974fd4d-c161-41f5-b6c4-1466867ec240" (UID: "f974fd4d-c161-41f5-b6c4-1466867ec240"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601368 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vbs\" (UniqueName: \"kubernetes.io/projected/f974fd4d-c161-41f5-b6c4-1466867ec240-kube-api-access-z9vbs\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601408 4766 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f974fd4d-c161-41f5-b6c4-1466867ec240-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.601421 4766 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f974fd4d-c161-41f5-b6c4-1466867ec240-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966424 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" event={"ID":"f974fd4d-c161-41f5-b6c4-1466867ec240","Type":"ContainerDied","Data":"6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16"} Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966728 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5bcf9cd16e2d7cda8a5933abdde2fa152bd8e4447a9b066e95a907939c0e16" Jan 30 18:15:03 crc kubenswrapper[4766]: I0130 18:15:03.966532 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-8pt8n" Jan 30 18:15:04 crc kubenswrapper[4766]: I0130 18:15:04.449144 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 18:15:04 crc kubenswrapper[4766]: I0130 18:15:04.461283 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-t4zn4"] Jan 30 18:15:06 crc kubenswrapper[4766]: I0130 18:15:06.052757 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5ff932-157e-49bf-9f1e-b4dc767de05e" path="/var/lib/kubelet/pods/1d5ff932-157e-49bf-9f1e-b4dc767de05e/volumes" Jan 30 18:15:36 crc kubenswrapper[4766]: I0130 18:15:36.264258 4766 scope.go:117] "RemoveContainer" containerID="2114380f0112baa1ec046121feaf5820547d68532f27b3cf3f25db273ce53dee" Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.373957 4766 generic.go:334] "Generic (PLEG): container finished" podID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" exitCode=0 Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.373999 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzpss/must-gather-w799p" event={"ID":"857930ca-2670-4ab4-ba29-ece210bd2af5","Type":"ContainerDied","Data":"776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8"} Jan 30 18:15:50 crc kubenswrapper[4766]: I0130 18:15:50.375133 4766 scope.go:117] "RemoveContainer" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" Jan 30 18:15:51 crc kubenswrapper[4766]: I0130 18:15:51.049567 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/gather/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.067454 4766 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.068068 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vzpss/must-gather-w799p" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerName="copy" containerID="cri-o://fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" gracePeriod=2 Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.080306 4766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzpss/must-gather-w799p"] Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.465998 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/copy/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.467540 4766 generic.go:334] "Generic (PLEG): container finished" podID="857930ca-2670-4ab4-ba29-ece210bd2af5" containerID="fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" exitCode=143 Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.467594 4766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a83fa3d0421a4c02b1382ecdde5f2c954b6d5a37559a41d3ebe5dfe743483d" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.535868 4766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzpss_must-gather-w799p_857930ca-2670-4ab4-ba29-ece210bd2af5/copy/0.log" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.536238 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.646563 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") pod \"857930ca-2670-4ab4-ba29-ece210bd2af5\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.646671 4766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") pod \"857930ca-2670-4ab4-ba29-ece210bd2af5\" (UID: \"857930ca-2670-4ab4-ba29-ece210bd2af5\") " Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.653468 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc" (OuterVolumeSpecName: "kube-api-access-mrqvc") pod "857930ca-2670-4ab4-ba29-ece210bd2af5" (UID: "857930ca-2670-4ab4-ba29-ece210bd2af5"). InnerVolumeSpecName "kube-api-access-mrqvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.750316 4766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqvc\" (UniqueName: \"kubernetes.io/projected/857930ca-2670-4ab4-ba29-ece210bd2af5-kube-api-access-mrqvc\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.806732 4766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "857930ca-2670-4ab4-ba29-ece210bd2af5" (UID: "857930ca-2670-4ab4-ba29-ece210bd2af5"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:15:59 crc kubenswrapper[4766]: I0130 18:15:59.860291 4766 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/857930ca-2670-4ab4-ba29-ece210bd2af5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 18:16:00 crc kubenswrapper[4766]: I0130 18:16:00.052953 4766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857930ca-2670-4ab4-ba29-ece210bd2af5" path="/var/lib/kubelet/pods/857930ca-2670-4ab4-ba29-ece210bd2af5/volumes" Jan 30 18:16:00 crc kubenswrapper[4766]: I0130 18:16:00.475776 4766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzpss/must-gather-w799p" Jan 30 18:16:09 crc kubenswrapper[4766]: I0130 18:16:09.044940 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:16:09 crc kubenswrapper[4766]: I0130 18:16:09.045631 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:16:36 crc kubenswrapper[4766]: I0130 18:16:36.339677 4766 scope.go:117] "RemoveContainer" containerID="fdd06e0bdd56096dd8720c76934293ed2794220f217e02573d2cd3ab6e769401" Jan 30 18:16:36 crc kubenswrapper[4766]: I0130 18:16:36.370194 4766 scope.go:117] "RemoveContainer" containerID="776a408dabef3cda5dfcce8b8d2f50984cb6bb6711550c6bec4c470e6ef1c7d8" Jan 30 18:16:39 crc kubenswrapper[4766]: I0130 18:16:39.045499 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:16:39 crc kubenswrapper[4766]: I0130 18:16:39.045891 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.044939 4766 patch_prober.go:28] interesting pod/machine-config-daemon-ddhn5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.045546 4766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.045588 4766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.046033 4766 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.046080 4766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" podUID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerName="machine-config-daemon" containerID="cri-o://0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363" gracePeriod=600 Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193442 4766 generic.go:334] "Generic (PLEG): container finished" podID="0a25c516-3d8c-4fdb-9425-692ce650f427" containerID="0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363" exitCode=0 Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193497 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerDied","Data":"0ede35b5fdca5259b34db0ae953855db165e425553aff7582713bfc641edf363"} Jan 30 18:17:09 crc kubenswrapper[4766]: I0130 18:17:09.193565 4766 scope.go:117] "RemoveContainer" containerID="d30d418df26d8fd4a7d7f4c28a838feebfa8adef2cda9408a1193bbf367cfb2a" Jan 30 18:17:10 crc kubenswrapper[4766]: I0130 18:17:10.204459 4766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ddhn5" event={"ID":"0a25c516-3d8c-4fdb-9425-692ce650f427","Type":"ContainerStarted","Data":"fd13e073ddfc7e30e655ecb7ad5c4e75009901f530223a8332344d3d9e5f1cc1"}